
What is the Potential for Digital Twins in Healthcare? – HIT Consultant

David Talby, CTO, John Snow Labs

Digital twins are virtual representations of an object or system that span its lifecycle, are updated from real-time data, and use simulation, machine learning, and reasoning to help decision-making (IBM). In most cases, this helps data scientists understand how products are operating in production environments and anticipate how they may behave over time. But what happens when a digital twin is that of a human being?

By using digital twins to model a person, you can use technologies like natural language processing (NLP) to better understand data and uncover other useful insights that will help improve use cases from customer experience to patient care. Today, we're simply generating more data than ever before. Digital twins can be useful in synthesizing this information to provide actionable insights.

As such, there are few fields in which digital twins can be more helpful than healthcare. Take a visit to your primary care physician, for example. They will have a baseline understanding of you: your history, the medications you take, allergies, and other factors. If you then go to see a specialist, they may ask many of the same repetitive questions, and remake inferences and deductions that have been made before. But beyond convenience and time savings, digital clones can substantially help with accuracy.

Having a good virtual replica of a patient enables medical professionals to dig down into specific medications, health conditions, and even social determinants of health that may impact care. Greater detail and context enables providers to make better clinical decisions, and it's all being done behind the scenes, thanks to advances in artificial intelligence (AI) and machine learning (ML).

Digital Twins in Production

Digital clones or digital twins can greatly benefit the healthcare system, and we're already starting to see them in use. Kaiser Permanente uses digital twins through a system that improves patient flow within a hospital. It achieves this by combining structured and unstructured data to build a more complete view of each patient and anticipate what their needs will be at the hospital. In another instance, Roche uses digital twins to help securely integrate and display relevant aggregated data about cancer patients in a single, holistic patient timeline.

Digital twins are already at work in some of the largest healthcare organizations in the world, but their potential doesn't stop with the existing use cases. There are many other applications for digital twins at play, and they span from practical everyday use to functions that sound more like science fiction than reality. Here are some additional areas where digital twins can be particularly useful in healthcare:

Summarizing Patient Data: Providers are experiencing information overload with the amount of data in today's healthcare system. From electronic health records (EHRs) to doctors' notes to diagnostic imaging, it can be a challenge to connect the disparate data (structured tables, unstructured text, medical images, sensors, and more) associated with an individual patient. Consider a patient with a cancerous tumor along with other underlying conditions. Typically, oncologists and other specialists will meet to determine the next steps in treatment, whether it be surgery, medication, or another protocol. Integrating all this data into a unified, relevant, and summarized timeline can be done using a combination of natural language processing (NLP), computer vision (CV), and knowledge graph (KG) techniques today.
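
As a rough illustration of the NLP piece of that pipeline, the sketch below pulls entities out of a free-text note. It uses spaCy's general-purpose English model as a stand-in for a dedicated clinical NLP model, and the sample note and entity labels are invented for the example.

```python
# Minimal sketch: extracting structured facts from a free-text clinical note.
# spaCy's general English model stands in for a clinical model, so the labels
# it emits (DATE, QUANTITY, ...) differ from clinical ones (DRUG, CONDITION, ...).
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

note = ("Patient is a 62-year-old male with type 2 diabetes, "
        "started on metformin 500 mg in March 2021.")

doc = nlp(note)
for ent in doc.ents:
    # Each entity becomes a candidate node for the patient's knowledge graph.
    print(ent.text, ent.label_)
```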

Accelerating Precision Medicine: Precision medicine is mostly applied in the areas of cardiology and oncology, dealing with serious conditions such as cancer and heart disease. Sometimes, instead of recommending an aggressive treatment like chemotherapy, it's important to see if a patient has certain genomic biomarkers that can inform doctors whether another approach may work better for that patient. Genetic profiling is useful for uncovering these insights, helping doctors better understand a given patient's tumor, labs, genomics, history, and other pertinent details to reach an optimal decision. As a result, the clinician can provide a more personalized approach to care. However, to achieve this, you need to aggregate much more information about the patient. By building a digital twin, you can compare an individual to other patients who are similar in clinically important ways to see if there are genomic similarities and how certain treatments have impacted them.
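
One very simplified way to picture that "compare against similar patients" step is as a nearest-neighbour search over numeric patient profiles. The sketch below is illustrative only: the feature vectors are made up, and a real digital twin would derive them from EHR, imaging, and genomic data.

```python
# Toy patient-similarity sketch: reduce each patient to a feature vector
# (labs, genomic markers, history flags -- all invented here) and rank the
# cohort by cosine similarity to the new patient.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

new_patient = np.array([0.9, 0.1, 1.0, 0.4])   # hypothetical profile
cohort = {
    "patient_A": np.array([0.8, 0.2, 1.0, 0.5]),
    "patient_B": np.array([0.1, 0.9, 0.0, 0.7]),
}

ranked = sorted(cohort.items(), key=lambda kv: cosine(new_patient, kv[1]), reverse=True)
for patient_id, vec in ranked:
    print(patient_id, round(cosine(new_patient, vec), 3))
```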

Process Improvement: Improving organizational performance, and thereby improving patient outcomes or population health, requires a high level of specificity. For example, if your goal is to reduce the length of a patient's hospital stay, it's imperative to understand many other factors about their condition. Through structured data, you can find information like whether the patient has a chronic condition, what medications they were taking, or whether or not they have insurance. But some of the considerations that really matter in terms of the duration of the patient's hospital stay (how they are eating, feeling, sleeping, coping, moving, etc.) can only be found in free-text data. Creating a digital twin to anticipate patient needs and the length of their stay can be very valuable.

What's Next for Digital Twins

Some medical devices have the capability of producing digital twins of specific organs or conditions so doctors can better diagnose them. Areas like NLP can be a great help here if you have a patient with a chronic condition (asthma, COPD, mental health issues, and others). For acute issues, especially in oncology, cardiology, and psychiatry, digital twins can offer a higher level of detail. For example, creating the digital twin of a patient's heart enables a doctor to see exactly what's going on (whether there is scarring from previous surgeries or an abnormality that needs to be inspected further) and make better decisions before an operation, rather than during. This can mean a world of difference for patient outcomes.

We'll start to see more advanced use cases for digital twins in the coming years. But to truly live up to the hype, it's crucial that we move beyond simply collecting and analyzing only structured data. Recent advances in deep learning and transfer learning have made it possible to extract information from imaging and free-text data, serving as the connective tissue between what can be found in EHRs and other information, like radiology images and medical documents of all types. Only then can we begin to construct a meaningful digital twin to uncover useful insights that will help improve hospital operations and patient care.

About David Talby

David Talby, Ph.D., MBA, is the CTO of John Snow Labs, the AI and NLP for healthcare company providing state-of-the-art software, models, and data to help healthcare and life science organizations put AI to good use. He has spent his career making AI, big data, and data science solve real-world problems in healthcare, life science, and related fields.

The rest is here:
What is the Potential for Digital Twins in Healthcare? - HIT Consultant


Amazon's Werner Vogels: Enterprises are more daring than you might think – Protocol

When AWS unveiled Lambda in 2014, Werner Vogels thought the serverless compute service would be the domain of young, more tech-savvy businesses.

But it was enterprises that flocked to serverless first, Amazon's longtime chief technology officer told Protocol in an interview last week.

"For them, it was immediately obvious what the benefits were and how you only pay for the five microseconds that this code runs, and any idle is not being charged to you," Vogels said. "And you don't have to worry about reliability and security and multi-[availability zone] and all these things that then go out of the window. That was really an eye-opener for me: this idea that we sometimes have in our head that sort of the young businesses are more technologically advanced and moving faster. Clearly in the area of serverless, that was not the case."

AWS Lambda launched into general availability in 2015, and more than a million customers are using it today, according to AWS.

Vogels gave Protocol a rundown on AWS Lambda and serverless computing, which allows customers to build and run applications and services without provisioning or managing servers. He also talked about Amazon CodeWhisperer, AWS' new machine learning-powered coding tool, launched in preview in June; how artificial intelligence and ML are changing developers' lives; and his thoughts on AWS providing customers with primitives versus higher-level managed services.

This interview has been edited and condensed for clarity.

So what's the state of the state on AWS Lambda and how it's helping customers, and are there any new features that we can expect?

You'll see a whole range of different migrations happening. We've had folks from Capital One that migrated old mainframe codes to Lambda. [IRobot, which Amazon announced plans to acquire on Friday], the folks that make Roomba, the automatic [vacuum] cleaner, have their complete back end running as serverless because, for example, that's a service that their customers don't pay for, and as such, they really wanted to minimize their costs yet provide a good service. There's a whole range of different projects happening, whether that is pre-processing images at some telescope deep in Chile, all the way up to monitoring Snowcones running in the International Space Station, where they run Lambda on that device as well and can actually do processing of imagery and things like that. It's become quite pervasive in that sense.

Now, the one thing is, of course, if you have existing code and you want to move over to the cloud, moving over to a virtual machine is easy: it's all in the same environment that you had on-premises. If you want to decompose the application that you had, and don't want to do too many code changes, probably containers are a better target for that.

But for quite a few of our customers that really want to start from scratch and sort of really innovate and really think about [what] event-driven architectures look like, serverless quickly becomes the default target for them. Mostly also because it's not only that we see a significant reduction in cost for our customers, but also a significant reduction in their carbon footprints, because we're able to do much better packing on energy than customers would be able to do by themselves. We now also run serverless on our Graviton processors, so you'll easily see a 40% reduction in cost and energy usage.


But always I'm a bit ambivalent about the word serverless, mostly because many people associate that with when we launched Lambda. But in essence, the first service that we launched, S3, also is really serverless. For me, serverless means that our customers don't have to think about security, reliability, managing performance, managing scale, doing failover (all those kinds of things) and really controlling costs. And so, in essence, almost all services at AWS are serverless by nature. If you think about DynamoDB [a serverless NoSQL database], or if you think about Neptune [a graph database service] or any of the other services that we have, most of them are serverless because you don't have to think about sort of provisioning them, managing them. That's all done for you.
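
For readers unfamiliar with what a Lambda function looks like in practice, the hedged sketch below shows a minimal Python handler; the platform provisions, scales, and bills it per invocation. The event shape here is an assumed plain JSON payload rather than any specific AWS trigger.

```python
# Minimal sketch of an AWS Lambda handler in Python. The only code the
# developer writes is the business logic; provisioning and scaling are managed.
import json

def lambda_handler(event, context):
    # "name" is an assumed field in the incoming event for this example.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```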

Can you talk about the value of CodeWhisperer and what you think is the next big thing for, or the future of, low-code/no-code?

For me, CodeWhisperer is more an assistant to a developer. There's a number of application areas where I think machine learning really shines and it is sort of augmenting professionals by helping them, taking away mundane tasks. And we already did that, of course, in AWS. If you think about development, there's CodeGuru and DevOps Guru, which are both already machine-learning services to help customers with, on one hand, operations, and the other one sort of doing the early security checks during the development process.

CodeWhisperer even takes that a step further, where if you look at how our developers develop, there's quite a few mundane tasks where you will go search on the web for a piece of code: how do we do [single sign-on] login into X, Y or Z? Most people will just cut and paste or do a little translation. If that was in Python and you need to actually write it in TypeScript, we may do a translation on that.

There's a lot of work, actually, that developers do in that particular area. So we thought that we could really help our customers there by using machine learning to look at the complete base of, on one hand, the AWS code, the Amazon code and all the open-source code that is out there, and then do a qualitative test on that, and then include it into this body of work where we can easily help customers by just writing some plain text, and then saying, "I want a [single sign-on] log-on here," and then the code automatically appears. And with that, we can do checks for security, we can do checks for bias. There's lots of other things that are now possible because we're basically assisting the developer in being more efficient and actually writing the code that they really want to write.
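
As a purely illustrative example of that comment-to-code workflow (invented for this article, not actual CodeWhisperer output), a developer might type a plain-text comment and accept a suggestion along these lines:

```python
# Illustration of the interaction pattern only: the developer writes the
# comment below and the assistant proposes an implementation.

# upload a local file to an S3 bucket
import boto3

def upload_file(bucket: str, key: str, path: str) -> None:
    s3 = boto3.client("s3")
    # boto3's high-level transfer call handles retries and multipart uploads.
    s3.upload_file(path, bucket, key)
```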

When we launched Lambda, I said the only code that will be written in the future is business logic. Well, it turns out we're still not completely there, but tools like CodeWhisperer definitely help us to get on that path because you can focus on what's the unique code that you need to write for the application that you have, instead of the same code that everybody else needs to write.

People really like it. It's also something that we continuously improve. This is not a standing-still product. As we look at more code, as we get more feedback, the service improves.

If I think about software developers, it's one of the few jobs in the world where you can be truly creative and can go to work and create something new every morning. However, there's quite a bit of heavy lifting still around that [that] sort of has nothing to do with your creativity or your ability to solve problems. With CodeWhisperer, we really tried to take the heavy lifting away so that people can focus on the creativity part of the development job, and I think anything we can do there, developers like.

In your tech predictions for 2022, you said this is the year when artificial intelligence and machine learning take on the undifferentiated heavy lifting in the lives of developers. Can you just expand on that, and how AWS is helping that?

When you think about CodeWhisperer and CodeGuru and DevOps Guru, or Copilot from GitHub, this is just the beginning of seeing the application area of machine learning to augment humans. Whether there is a radiologist somewhere who is late at night looking at imagery and gets help from machine learning to compare these images, or whether it's a developer, we're really at the cusp of how machine learning will accelerate the way that we can build digital systems.

I was in Germany not that long ago, and there the government told me that they have 80,000 open IT positions. With all the scarceness in the world of labor, anything which we can do to make the life of developers easier so that they're more productive, that it makes it easier for people that do not have a four-year computer science degree to actually get started in the IT world, anything we can do there will benefit all the enterprises in the world.

What's another developer problem that you're trying to solve, or what are developers asking AWS for?

If you're an organization like AWS or Amazon or quite a few other organizations around the world, you make use of the DevOps principle, where basically your developers also have operational tasks. If you do operations, there's information that is coming from 10 or 20 different sides. There's log files, there's metrics, there's dashboards and actually tying that information together and analyzing the massive amounts of log files that are being produced by systems in real time, surfacing that to the operators, showing that there may be potential problems here and then give context around it because normally these log files are pretty cryptic. So what we do with DevOps Guru, for example, is provide context around it such that the operators can immediately start taking action, looking for what [the] root cause of particular problems are. So we're looking at all of the different aspects of development and operations to see what are the kind of things that we can build to help customers there.
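
A toy sketch of the underlying problem (not the DevOps Guru API) might look like the following: collapse raw log lines into per-service error rates and flag the services whose current rate spikes well above a baseline. The log format and thresholds are assumptions for the example.

```python
# Toy illustration of surfacing operational problems from noisy logs.
from collections import defaultdict

def error_rates(log_lines):
    # Assumed line format: "<service> <LEVEL> <message>"
    totals, errors = defaultdict(int), defaultdict(int)
    for line in log_lines:
        service, level, _ = line.split(" ", 2)
        totals[service] += 1
        if level == "ERROR":
            errors[service] += 1
    return {s: errors[s] / totals[s] for s in totals}

def flag_spikes(current, baseline, factor=3.0):
    # Flag services whose error rate exceeds several times their baseline.
    return [s for s, rate in current.items() if rate > factor * baseline.get(s, 0.01)]
```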

At AWS re:Invent last year, you put up a slide that read "primitives, not frameworks," and you said AWS gives customers primitives or simple machines, not frameworks. Meanwhile, Google Cloud and Microsoft are offering these sorts of larger, chunkier blocks such as managed services where customers don't have to do the heavy lifting, and AWS also seems to be selling more of them as well.

Let me clarify that. It mostly has to do also with sort of the speed of innovation of AWS.

Last year, we launched more than 3,000 features and services. And so why are we still looking at these fine-grained building blocks? Let me go back to the beginning of AWS. When we started, the way software companies at that moment provided infrastructure or platforms was basically that they would give developers everything [but] the kitchen sink on Day One. And they would tell you, "This is how you shall develop software on this platform." Given that these platforms took quite a while to develop, basically what you operate is a platform that is already five years old, that is looking at five years back.

Werner Vogels gives his keynote at AWS re:Invent 2021. Photo: Amazon Web Services, Inc.

We knew that if cloud would really be effective, development would change radically. Development would indeed be able to scale quicker and make use of multiple availability zones and many different types of databases and things like that. So we needed to make sure that we were not building things from the past, but that we were building for how our customers would want to build in 2025. To do that, you don't give them everything and tell them what to do. You give them small building blocks, and that's what I mean by primitives. And all these small building blocks together make a very rich ecosystem for developers to choose from.

Now, quite a few, especially the more tech-savvy companies, are more than happy to put these building blocks together themselves. For example, if you want to build a data lake, we have to use Glue [a serverless data integration service], we have to use S3, maybe some Redshift, Kinesis for ingestion, Athena for ad hoc analytics. I think there's quite a few customers that are building these things by themselves.

But then there's a whole category of customers that just want a data lake. They don't want to think about Glue and S3 and Kinesis, so we give them a service or solution called Lake Formation. That automatically grabs all these things together and gives them this higher-level component.

Now the fact that we are delivering these higher-level solutions, for example, some customers just want a backup solution, and they don't want to think about how to move things into S3 and then do some intelligent tiering [so] that if this data isn't accessed in two weeks, then it is being moved into cold storage. They don't want to think about that. They just want a backup solution. And so for that, we provide them some backup. So we do have these higher-level services. It's more managed-style services for you, but they're all still based on the primitives that sit underneath there. So whether you want to start with Lake Formation and later on maybe start tweaking things under the covers, that's still possible for you. While we are providing these higher-level components, where customers need to have less worry about which components can fit together, we still provide the underlying components to the developers as well.
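
As one concrete, hedged example of the tiering rule described above, an S3 lifecycle configuration can move objects to colder storage after a set number of days. The bucket name and 14-day window are assumptions, and lifecycle rules key off object age rather than last access, so this is only an approximation of intelligent tiering.

```python
# Sketch: move objects in a (hypothetical) backup bucket to Glacier after 14 days.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-untouched-objects",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 14, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```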

Is quantum computing something that enterprise CTOs should be keeping their eye on? Do you expect there to be an enterprise use for it, or will it be a domain just for researchers, or is it just too far out to surmise?

There is a back-and-forth there. If I look at some of the newer developments, it's clearly research oriented. The reason for us to provide Braket, which is our quantum compute service, is that customers generally start experimenting with the different types of hardware that are out there. And there's typical usage there. It's life sciences, it's oil and gas. All of these companies are already investigating whether they could see significant speed-ups if they would transform their algorithms into things that could run on a quantum machine.

Now, there's a major difference between, let's say, traditional development and quantum development. The tools, the compilers, the software principles, the books, the documentation for traditional development: that's huge, and you need great support.

In quantum, I think what we'll see in the coming four or five years, as I listen to the Amazon researchers working on this, [is that] much of the work will not only go into hardware, but also how to provide better software support around it, such that development for these types of machines becomes easier or even goes at the same level as traditional machines. But one of the things that I think is very, very clear is that we're not going to be able to solve new problems necessarily with quantum computing; we're just going to be able to solve old problems much, much faster. That's why the life sciences companies and health care and companies that are very interested in the high-performance compute are experimenting with quantum because that could accelerate their algorithms, maybe by orders of magnitude. But, we still have to see the results of that. So I'm keeping a very close eye on it, because I think there may be very interesting workloads and application areas in the future.
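
Experimentation of the kind Vogels describes typically starts small. The hedged sketch below builds a two-qubit Bell circuit with the Amazon Braket SDK and runs it on the free local simulator rather than managed quantum hardware; it assumes the amazon-braket-sdk package is installed.

```python
# Minimal Braket sketch: entangle two qubits and sample measurement outcomes
# on the local simulator (no managed quantum hardware involved).
from braket.circuits import Circuit
from braket.devices import LocalSimulator

bell = Circuit().h(0).cnot(0, 1)                      # Bell-pair circuit
result = LocalSimulator().run(bell, shots=1000).result()
print(result.measurement_counts)                      # roughly half '00', half '11'
```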

Read more:
Amazon's Werner Vogels: Enterprises are more daring than you might think - Protocol


Google TV preparing to add its own free live TV, here's the channel list – 9to5Google

After first being reported nearly a year ago, Google TV is now making tangible progress toward launching 50 channels worth of free, ad-supported streaming content, and we have your first look at the channel list.

About APK Insight: In this APK Insight post, we've decompiled the latest version of an application that Google uploaded to the Play Store. When we decompile these files (called APKs, in the case of Android apps), we're able to see various lines of code within that hint at possible future features. Keep in mind that Google may or may not ever ship these features, and our interpretation of what they are may be imperfect. We'll try to enable those that are closer to being finished, however, to show you how they'll look in the case that they do ship. With that in mind, read on.

With Google TV, the successor/redesign of Android TV, the company has been looking to make its platform smarter and more competitive with other smart TV options. One of the benefits of owning a Samsung Smart TV is access to Samsung TV Plus, which features over 200 channels worth of free content, supported by advertisements.

By comparison, Google TV has steadily worked on its live TV options, gaining deep integration with apps like Pluto TV and Philo, as well as the company's own YouTube TV. As was reported last year, Google TV is set to expand support for live TV by including its own set of channels.

According to text in the latest version of the Android TV launcher app, things will start out with an initial set of 50 channels.

Enjoy 50 channels of live TV without the need to subscribe, sign-up, or download

To be clear, these are distinct from other options available on Google TV today, as those integrations require you to download an app, while the new text says the channels are available without the need to [] download. More explicitly, the launcher refers to these as Google TV Channels.

So what should we expect to stream when Google TV gains its free, ad-supported live TV channels? Based on an in-app description, there should be a decent variety of news, sports, movies, and shows. Luckily, the app also includes a graphic that showcases over 30 of the soon-to-be-available channels.

Based on the list so far, it seems that Google has managed to land quite a few well-known channels and brands to pad out its free live TV options. While it's still a long way from the 200+ channels of Samsung TV Plus, this is quite a strong start that should have something for everyone.

Thanks to JEB Decompiler, from which some APK Insight teardowns benefit.



Read more from the original source:
Google TV preparing to add its own free live TV, here's the channel list - 9to5Google


Concerning Mind After Midnight theory shows why you shouldn't stay up at night – ZME Science


It's not just the outside world that is shrouded in darkness at night. Scientists are making the observation that our minds are more susceptible to negative thinking during the night than in the daytime, and this could have significant consequences for our mental health. In a new study, researchers have presented this effect under the ominous name of "Mind After Midnight" to raise awareness and call for more research into the physiological and psychological processes that start to take over our brains deep into the night.

Unlike rats and owls, humans are not nocturnal creatures. We evolved to be diurnal, or active during the day, and this is easy to prove by studying the circadian rhythm, the 24-hour cycle that determines wakefulness and sleep, which, in humans, is obviously geared toward sleeping in the dark. The brain can tell when it's nighttime based on the amount of light over time it detects via the eyes.

When it's dark, the brain floods the body with hormones that lower blood pressure, stress levels, body temperature, and other things that generally make us sleepy and prime us for slumber. On the flip side, the morning sunshine flips chemical switches that make us more alert and wakeful.

When this natural rhythm is disrupted, such as by staying up late at night, a host of deleterious consequences can occur, including sleep disorders. Over time, it can make it hard to fall asleep and leave you constantly fatigued throughout the day, as well as affecting memory, mood, physical health, and overall function.

But while most research has focused on examining what poor nightly sleep does to us the next day, not much attention has been given to what actually happens in those instances when we're wide awake in the middle of the night.

"The basic idea is that from a high level, global, evolutionary standpoint, your internal biological circadian clock is tuned towards processes that promote sleep, not wakefulness, after midnight," says Elizabeth Klerman, MD, PhD, an investigator in the Department of Neurology at Massachusetts General Hospital, Professor of Neurology at Harvard Medical School and the senior author of the paper.

"There are millions of people who are awake in the middle of the night, and there's fairly good evidence that their brain is not functioning as well as it does during the day," she added. "My plea is for more research to look at that, because their health and safety, as well as that of others, is affected."

Klerman and colleagues reviewed a number of studies and publicly available statistics showing how staying active after dark can affect our brain systems and, in turn, our behavior. The evidence they've gathered thus far suggests that staying awake late at night makes us more biased towards negative emotions and more prone to taking risks that may endanger our physical integrity.

For instance, suicides are much more likely to occur during nighttime hours than during the day. Homicides and other violent crimes are most common at night, as is the use of illicit drugs, as well as unhealthy eating habits like snacking on carb-rich foods in the middle of the night.

It seems like a lot of unhealthy choices come out at night to haunt us. This observation has prompted Klerman and colleagues to propose a new hypothesis called the Mind After Midnight, which argues there may be a biological basis for all of these reported nighttime negative effects.

The idea is that things like attentional biases, negative affect, altered reward processing, and prefrontal disinhibition interact to promote behavioral dysregulation and even psychiatric disorders. The researchers cite studies that show how the circadian rhythm influences neural activity over the course of 24 hours, thereby affecting our moods and the way we interact with the world. For instance, research shows that positive affect, that is the tendency to view information in a positive light, is at its highest during the morning, whereas negative affect is highest at night.

Research also shows that the human brain produces more dopamine at night, an important neurotransmitter that plays a role in many important body functions, including movement, memory, and pleasurable reward and motivation. This inflow of dopamine can hijack the reward and motivation system in the brain, making us more prone to risky and impulsive behavior, whether it's snacking on a huge bucket of ice cream at 12:00 AM or shooting heroin at night after resisting the cravings during the day.

Almost everyone has probably had to face the nighttime blues at least at some point in their lives, a weird dark hour when your worldview becomes narrower and more negative. The world is suddenly much smaller than it actually is and it just sucks. Klerman herself is no exception.

"While part of my brain knew that eventually I would fall asleep, while I was lying there and watching the clock go tick, tick, tick, I was beside myself," she recalls.

"Then I thought, what if I was a drug addict? I would be out trying to get drugs right now. Later I realized that this may be relevant also if it's suicide tendencies, or substance abuse or other impulse disorders, gambling, other addictive behaviors. How can I prove that?"

For now, the Mind After Midnight is just an unvalidated hypothesis, but a concerning one that deserves further attention. Ironically, though, in order to investigate it, there would have to be some researchers who would need to be working after midnight to supervise test subjects. This may include, for instance, taking fMRI images of the brains of volunteers with disrupted sleep cycles.

"Most researchers don't want to be paged in the middle of the night. Most research assistants and technicians don't want to be awake in the middle of the night," Klerman concedes.

"But we have millions of people who have to be awake at night or are awake at night involuntarily. Some of us will have to be inconvenienced so we can better prepare them, treat them, or do whatever we can to help."

Follow this link:
Concerning Mind After Midnight theory shows why you shouldn't stay up at night - ZME Science


‘I’ve Been To The Deepest Point Of The OceanHere’s What I Saw’ – Newsweek

I definitely inherited my wanderlust from my parents. When I was a kid, they would take us from South Florida to Washington state in the back of our wood-paneled station wagon. So, I've seen all of the continental United States. Along the way, they took us to Cape Canaveral, and I have been a space enthusiast ever since.

It was always a dream of mine to go to space. I wanted to be an astronaut. But I went to college and there, I started my first company, a travel business. We did big, group tours to the Caribbean. Then, I began traveling the world with that tour operation business.

While running my travel business I visited 75 or so countries and that's when I really began to explore, learning so much about the world and myself. This is when I transitioned from being a collector of passport stamps to being a "connector," meeting fascinating people all across the world. I ended up visiting my 193rd country in 2019, which is the total number of UN-recognized countries in the world.

In March 2022, I had the privilege of being able to go to space aboard the Blue Origin shuttle mission, which fulfilled my lifelong dream. I had been contacting Blue Origin for several years, trying to get on one of their space flights. I must have contacted them about 20 times and finally heard from them in December, 2021. They called and asked me if I'd like to be on the next flight and my knees literally buckled! Prior to the launch, we flew to Van Horn, Texas for four days of intense training, familiarizing the crew about the launch day sequence, safety procedures, and practicing getting in and out of a seat during zero gravity.

Going into space was incredible; it was an out-of-body experience. Being 66 miles above the Earth, I was riveted by the blackness of the universe.

Then, in July 2022, I went down to Challenger Deep, the deepest known point of Earth's seabed, located around seven miles down in the Mariana Trench, in the western Pacific Ocean. I had learned about the opportunity to go to the bottom of the ocean a few years ago, however with the COVID pandemic, I didn't feel like that was the right time for me to go. Also, going to space was my primary focus. The impetus for me going in 2022 was that the submarine was being sold and it was either go now or never. There was no debate. I have worked hard my entire life as an entrepreneur. It was worth every penny.

Challenger Deep consists of the eastern, central and western pools. My pilot, Tim McDonald, and I went to the eastern part of the eastern pool, to a place that had not been explored before, reaching a depth of somewhere between 10,925 and 10,935 meters (35,843 ft and 35,875 ft). It was utterly amazing.

The goal for me personally was just to explore the very bottom of the ocean. I didn't do any scientific research ahead of the trip, however there were scientists on board the boat our submarine descended from, mapping the seafloor. And, we visited a location that, as far as we know, no human had yet traveled in the Mariana Trench.

The trench lies around 210 nautical miles to the southwest of Guam, and we headed out from Guam aboard the DSSV Pressure Drop. Just before the dive I was mostly confident, although in the back of my mind, of course, I was somewhat concerned about what could go wrong. As I had been before going to space, I thought about my friends and family, and reflected on how incredibly fortunate I've been to have had these experiences.

At around 8 a.m. on July 5, we got into a submarine and went down. It took about four hours to descend to the bottom and on the way down, I just had this intense anticipation of what we were going to see. You don't really know. There are maps of what the topography of the bottom of the ocean there looks like, but there have been several occasions where the maps don't resemble what is actually there. So we had no idea what we were going to see. The aim was to map areas, get high resolution video and put human eyes on unseen places.

When we got to the bottom, it was pretty clear from the beginning that we were in store for something because the sonar readings on the sub were spectacular. In fact, the eyes of my pilot lit up. I said, "What do you see?" And he responded, "I've never seen a reading quite like this before."

Just 10 minutes or so from where we landed were spectacular areas where you could see the Pacific tectonic plate actually going under the Philippine Plate. We were actually witnessing where the two plates are colliding and all of the resulting rubble from that process.

We also saw some incredible life. We actually collected a number of amphipods that are like little shrimp; they are fantastic. Think about it: they have no light, they're in almost freezing temperatures, there's no oxygen, there's the crushing pressure. But these creatures thrive down there.

In addition, we saw some sea cucumbers, which looked like transparent, floating blobs of mucus. They float around you and you're thinking, "What is that thing?" They look like alien lifeforms.

But for me, the most mind-boggling thing was seeing these bacterial mats. In the light from the submarine, they looked like pieces of gold across a two or three square meter area. But they are not photosynthetic; there's no light and barely any oxygen down there. It was like being on a Mars rover. If life exists on Jupiter's moons or other planets, my guess is that it's likely going to be like what we saw in the Mariana Trench. To be able to see that sea life first hand was amazing.

At seven miles below sea level, with billions of gallons of water overhead, the pressure was 16,000 pounds per square inch. So obviously, if something happened where the titanium sphere of the sub was breached it would be instantly catastrophic. But, the biggest danger is getting entangled and being stuck down on the bottom with only 96 hours of emergency oxygen.
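
The quoted pressure figure checks out with simple hydrostatics: pressure is roughly water density times gravity times depth. The density and depth values in this rough sketch are approximate.

```python
# Ballpark check of hydrostatic pressure at roughly 10,930 m depth.
rho = 1025.0      # kg/m^3, approximate average seawater density
g = 9.81          # m/s^2
depth = 10_930.0  # m, roughly the depth reached in the eastern pool

pressure_pa = rho * g * depth          # ~1.1e8 Pa
pressure_psi = pressure_pa / 6894.76   # ~15,900 psi, consistent with ~16,000 psi
print(round(pressure_psi))
```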

We remained at the bottom of Challenger Deep for about two-and-half hours and I think the hairiest moment was getting to the bottom and the pilot saying, "What's that error message on the screen?" When the pilot needed to release some weights in order to become more buoyant and he flipped a switch but it didn't work, I thought, "Oh my gosh, are we going to get stuck." But luckily there was a backup, so he flipped another switch and it disengaged.

Overall, everything went as planned. And the reality is, the sub has been down to full ocean depth before, so I was pretty confident that it would withstand the pressure. But I was surprised that I didn't experience any unusual physical sensations inside the sub. It's a fully pressurized cabin so my ears didn't pop or anything like that.


I was one of less than 30 people to have made that trip. So few people have ever been down to see the bottom of the Mariana Trench because it's just so difficult to reach. More people have been to the moon, and that's quite a feat. It's pretty remarkable.

There are eight billion people on this planet. We have inhabited every square inch of land. We think we're so fabulous. Yet 70 percent of our Earth is ocean, and so little of it has been mapped or explored. I am also a professor, and my message to my students has always been that anything is possible: to push through boundaries and keep their dreams alive. Hopefully, I can inspire them.

This experience was equivalent to going to space, so I would absolutely jump at the chance to go again. For me personally, to see the deepest point of the ocean was a dream come true.

Jim Kitchen is an adventurer and professor of entrepreneurship at the University of North Carolina Kenan-Flagler Business School. You can follow him on Instagram @jimkitchen or Twitter @jimkitchen

All views expressed in this article are the author's own.

Continued here:
'I've Been To The Deepest Point Of The OceanHere's What I Saw' - Newsweek


Cloud Computing – GeeksforGeeks

In simplest terms, cloud computing means storing and accessing data and programs on remote servers that are hosted on the internet instead of a computer's hard drive or local server. Cloud computing is also referred to as internet-based computing.

Cloud Computing Architecture: Cloud computing architecture refers to the components and subcomponents required for cloud computing. These components typically include:

Hosting a cloud: There are three layers in cloud computing. Companies use these layers based on the service they provide.

Three layers of Cloud Computing

At the bottom is the foundation, the infrastructure, where people start and begin to build. This is the layer where cloud hosting lives.

Now, let's have a look at hosting: Let's say you have a company and a website, and the website has a lot of communications that are exchanged between members. You start with a few members talking with each other, and then gradually the number of members increases.

As time passes and the number of members increases, there is more traffic on the network and your server slows down. This causes a problem. A few years ago, websites were put on a server somewhere; to keep up, you had to buy and set up a number of servers. That costs a lot of money and takes a lot of time, and you pay for these servers both when you are using them and when you are not. This is called hosting.

This problem is overcome by cloud hosting. With cloud computing, you have access to computing power when you need it. Your website is put on a cloud server much as you would put it on a dedicated server. People start visiting your website, and if you suddenly need more computing power, you can scale up according to the need.
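
A toy way to picture that elasticity, as a sketch: decide the number of servers from the current traffic rather than pre-buying fixed hardware. The per-server capacity figure below is made up for the example.

```python
# Toy capacity-planning sketch: scale the server count with traffic.
import math

def servers_needed(requests_per_second: float,
                   capacity_per_server: float = 500.0,  # assumed requests/sec per server
                   minimum: int = 1) -> int:
    return max(minimum, math.ceil(requests_per_second / capacity_per_server))

print(servers_needed(120))    # quiet day  -> 1 server
print(servers_needed(4800))   # traffic spike -> 10 servers
```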

Benefits of Cloud Hosting:

References: https://en.wikipedia.org/wiki/Cloud_computing

This article is contributed by Brahmani Sai. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.


More here:
Cloud Computing - GeeksforGeeks


Cloud computing explained : PwC

Still with us? Dig into the details below:

Public cloud describes IaaS services like Alibaba Elastic Compute, Amazon AWS EC2, Digital Ocean Droplets, Microsoft Azure Virtual Machines and Google Compute Engine, to name a few. The providers all have multiple customers and deliver their services over the internet. Public cloud customers share the compute, storage and networking hardware with the cloud provider's other customers. It's similar to the way web hosting works: Public cloud is like shared web hosting and private cloud correlates to dedicated web hosting. (In cloud, both types are managed.)

Private cloud is the cloud terminology for scenarios in which the hardware and software resources underlying the cloud services are used exclusively by one business or organization. In private cloud, the hardware may be on-premises or off-site. It may be generated by the enterprise itself or provided physically or offered over the internet by a cloud service provider. The key point is that the hardware and software required to generate a private cloud are dedicated to or owned by one business and not shared by other businesses. This provides an added level of security that may be required for sensitive data.

Hybrid cloud environments are those in which an organization uses two or more cloud types (public, private or community clouds) in a coordinated way, usually on a common goal. (A community cloud is a cloud resource shared by two or more organizations working together on the same concern, such as related governmental departments and agencies, industry standards working groups and joint corporate/academic efforts.) One of the chief benefits of hybrid cloud is a good deal more flexibility and agility to get things done efficiently and quickly.

Multi-cloud means nothing more than using two or more cloud services from different cloud service providers. Why would you do this as opposed to sticking with one main provider? After all, it adds complexity, including a more challenging security environment. But there are good reasons why some enterprises intentionally use two or more IaaS vendors. Not placing all your eggs in one basket makes some sense, but there are better benefits to multi-cloud.

Cloud computing providers offer unique services, capabilities and pricing that you might want to leverage, possibly for different segments of your business. Using two or more cloud providers is one way to combat vendor lock-in, too. Finally, multi-cloud is about having the flexibility to move an application to a different cloud or run it across multiple clouds to get a job done faster. The advantage is similar to the benefits of hybrid cloud, but not every organization needs to incorporate a private cloud.

At the same time, there are benefits to sticking with a single cloud service provider. For some companies, going wide and deep with one vendor may give you opportunities you wouldn't get otherwise. Bottom line? Each company's cloud migration strategy will be unique and highly dependent on its business goals, current and future tech roadmap, and industry, among other factors.

More:
Cloud computing explained : PwC


Global MFTPaaS Market to Garner a CAGR of ~16% during 2022-2031; Growing Adoption of Cloud-Computing Technology and Need for Secure Data Transfer…


Key Companies Covered in the Global MFTPaaS Market Research Report by Research Nester are International Business Machines Corporation, Teradata Corporation, Oracle Corporation, Axway Software, Citrix Systems, Kiteworks Inc., Wipro Limited, Saison Information Systems Co. Ltd., TIBCO Software Inc., Hewlett Packard Enterprise, and other key market players.

New York, Aug. 04, 2022 (GLOBE NEWSWIRE) -- As per recent studies, the overall number of smartphone users in 2022 is expected to top approximately 6.75 billion worldwide, which is roughly 84.47 percent of the world's population. In total, around 7.30 billion people worldwide, i.e., 92.22 percent of the world's population, own smart and feature phones. Additionally, the launch of smartphone applications is accelerating across the globe, which resulted in the release of over 91,000 apps in June 2022 and around 135,000 new Android apps in May 2019.

Research Nester has published a detailed market report on the Global MFTPaaS Market for the forecast period, i.e., 2022-2031, which includes the ongoing industry innovations and recent trends being adopted by the major industry players to achieve their business targets. Apart from that, inclusive data on market size, growth rate, market revenue share, growth opportunities, and challenges for the market players, along with worldwide analysis of five major regions (North America, Latin America, Europe, Asia Pacific, and Middle East & Africa), has been provided in the report.

In the year 2021, roughly 4.11 billion people, which translates to over 65% of the global population, were expected to be online. Since 2019, the total number of internet users has grown by more than 18%, which amounts to about 900 million individuals. The global MFTPaaS market is estimated to grow at a CAGR of ~16% during the forecast period. The growth of the market can be attributed to the increasing usage of the internet and internet-based services and applications. MFTPaaS, or managed file transfer platform-as-a-service, offers a secure and managed end-to-end data transfer gateway. The market is estimated to expand as a result of both the increasing need for secure data transfer over the internet and the expanding usage of cloud computing technology. The expanding use of cloud computing and multi-cloud functionality is also anticipated to promote market expansion during the forecast period. For instance, in 2021, revenue from cloud computing exceeded USD 390 billion. MFTPaaS is synonymous with advances in cloud infrastructure to expedite file transfer and offset the challenges posed by legacy file transfer methods, thereby helping the cloud economy improve with reduced maintenance and infrastructure costs.


Get a Sample PDF Brochure: https://www.researchnester.com/sample-request-3957

Further, the development of the ICT sector has significantly boosted GDP growth, worker productivity, and other aspects of economies around the world. The production of goods and services by the ICT industry also promotes economic growth and development. The World Data Bank reports that between 2014 and 2020, the global exports of ICT products climbed from 11.4 percent to 14.3 percent of total exports of goods. Hence, the rising technological advancement and growing ICT sectors are estimated to propel the global MFTPaaS market growth outlook over the forecast period. Moreover, the rising investment in the research and development sector across the globe is anticipated to boost the market growth over the ensuing years. Research studies claim that since 2000, the amount spent globally on research & development has increased by more than three times in real terms, from roughly USD 680 billion to almost USD 2.4 trillion in 2019.

On the basis of geographical analysis, the global MFTPaaS market is segmented into five major regions: North America, Europe, Asia Pacific, Latin America, and the Middle East & Africa. Out of these, the market in the Asia Pacific region is estimated to hold a significant share and grow at the highest CAGR during the forecast period. The growth of the market can be attributed to the increasing adoption of technology, including cloud computing and the internet of things (IoT), in developing nations. Additionally, rising investments in technological advancement, especially cloud services, in nations such as China and India are anticipated to accelerate market expansion over the forthcoming years. For instance, along with the USA, China is responsible for over 72% of all blockchain-related patents, more than 49% of global IoT investment, and higher than 80% of the global market for public cloud computing services.

For more information in the analysis of this report, visit: https://www.researchnester.com/reports/mftpaas-market/3957

Moreover, the market in North America is estimated to gain the largest market share throughout the forecast period. In light of the increasing use of cutting-edge technologies and growing Bring Your Own Phone (BYOP) trend in workplaces in nations including Canada and the United States, the North American market is predicted to obtain the highest market share over the forecast period. It was observed that with more than 300 million smartphone users in the US as of 2021, this country has one of the largest smartphone markets in the world. Additionally, the presence of significant market competitors in the area and the accessibility of cutting-edge technology are predicted to drive market expansion during the forecast period.

The study further incorporates Y-O-Y growth, demand & supply and forecast future opportunity in North America (U.S., Canada), Europe (U.K., Germany, France, Italy, Spain, Hungary, Belgium, Netherlands & Luxembourg, NORDIC [Finland, Sweden, Norway, Denmark], Poland, Turkey, Russia, Rest of Europe), Latin America (Brazil, Mexico, Argentina, Rest of Latin America), Asia-Pacific (China, India, Japan, South Korea, Indonesia, Singapore, Malaysia, Australia, New Zealand, Rest of Asia-Pacific), Middle East and Africa (Israel, GCC [Saudi Arabia, UAE, Bahrain, Kuwait, Qatar, Oman], North Africa, South Africa, Rest of Middle East and Africa).


The global MFTPaaS market is segmented by installation into on-premise and cloud-based. Out of these, the cloud-based segment is estimated to hold the largest market share over the forecast period owing to the increasing usage of cloud computing and the reliance of many end-user sectors on cloud-based databases. Additionally, a lot of big businesses use cloud computing technologies to access and transfer huge amounts of data, which is predicted to accelerate market growth during the forecast period. Moreover, to increase their customer base in the worldwide market, major key players have adopted a variety of organic and inorganic techniques. They invest millions of dollars in product development and research to meet the demands of the MFTPaaS industry, which is predicted to boost the segment growth over the forecast period. For instance, the collaboration between Infosys and IBM in March 2020 to speed up digital transformation for businesses using IBM public cloud enabled organizations to adapt to changing business needs.

Further, the global MFTPaaS market is segmented by end-user into BFSI, retail, manufacturing, energy & utility, IT & telecommunication, and others. Among these, the BFSI (banking, financial services, and insurance) segment is anticipated to experience significant expansion throughout the projected period as a result of the industry's extensive use of sensitive data, rising demand for the data's secure transport, and the growing BFSI sector across the globe. As per estimates by the India Investment Grid, the BFSI industry in India is worth approximately USD 1 trillion and is expected to grow to become the third-largest by 2025.


The global MFTPaaS market is also segmented on the basis of type.

Global MFTPaaS Market, Segmentation by Type

Some of the prominent key players and their company profiling mentioned in the global MFTPaaS market research report include International Business Machines Corporation, Teradata Corporation, Oracle Corporation, Axway Software, Citrix Systems, Kiteworks Inc., Wipro Limited, Saison Information Systems Co. Ltd., TIBCO Software Inc., Hewlett Packard Enterprise, and other key market players. The profiling enfolds growth opportunities, challenges, and market trends prevalent for the growth of the market during the forecast period.

Do You Have Any Query Or Specific Requirement? Ask to Our Expert: https://www.researchnester.com/ask-the-analyst/rep-id-3957

Explore Our Recent Related Reports:

Hardware-in-the-Loop Market Segmentation by Component (Actuator, Sensors, and Others); by Application (Defense, Automotive, Electronics, Research & Education, and Others) - Global Demand Analysis & Opportunity Outlook 2031

Digital Transformation Consulting Services Market Segmentation by Type (Online Service, and Offline Service); by Application (BFSI, Transportation & Logistics, Oil & Gas, Healthcare, IT & Telecom, Manufacturing, Automotive, and Others); and by End-User (Large Enterprises, SMEs, and Others) - Global Demand Analysis & Opportunity Outlook 2031

LiDAR Sensors Market Segmentation by Type (Airborne, and Terrestrial); by Technology (Solid State, and Mechanical); and by Application (Vehicle Automation, Forest Planning and Management, Surveillance Technology, Transport Planning, Cellular Network Planning, and Others) - Global Demand Analysis & Opportunity Outlook 2031

AI-based Clinical Trial Solution Providers Market Segmentation by Application (Oncology, Cardiovascular Diseases, Metabolic Diseases, and Others); and by End-User (Pharmaceutical Companies, Academic Researcher, and Others) - Global Demand Analysis & Opportunity Outlook 2031

Hyperloop Technology in Transportation Market Segmentation by Type (Freight, and Passenger); by Component (Tubes, Pods, and Terminals); and by Speed (Less than 760 mph, and Above 760 mph) - Global Demand Analysis & Opportunity Outlook 2031

About Research Nester

Research Nester is a one-stop service provider with a client base in more than 50 countries, leading in strategic market research and consulting with an unbiased and unparalleled approach towards helping global industrial players, conglomerates and executives for their future investment while avoiding forthcoming uncertainties. With an out-of-the-box mindset to produce statistical and analytical market research reports, we provide strategic consulting so that our clients can make wise business decisions with clarity while strategizing and planning for their forthcoming needs and succeed in achieving their future endeavors. We believe every business can expand to its new horizon, provided a right guidance at a right time is available through strategic minds.

Contact for more Info:
AJ Daniel
Email: info@researchnester.com
U.S. Phone: +1 646 586 9123
U.K. Phone: +44 203 608 5919


Read the original post:
Global MFTPaaS Market to Garner a CAGR of ~16% during 2022-2031; Growing Adoption of Cloud-Computing Technology and Need for Secure Data Transfer...


Here’s how cloud computing enables the transformation of the MedTech sector – ETCIO

The COVID-19 pandemic exposed and heightened India's existing healthcare burden, triggering the need for action to build a more resilient health system. This necessitated an evolution of the healthcare model to contend with the challenges at hand, including a growing doctor-patient gap and a lack of infrastructure limiting healthcare accessibility in remote regions of the country. In response, health and human service providers have embraced the digital transformation of healthcare, leveraging technology and MedTech innovation to deliver effective and scalable solutions nationwide. In fact, data from EY suggests that 51% of surveyed health and human services organizations across the public and private spheres in India increased their use of digital technologies during the pandemic.

To expedite India's MedTech sector's growth trajectory and strengthen digital infrastructure for quality healthcare delivery, leveraging data effectively is paramount. How do we store valuable health information effectively, analyze it in the context of broader patient history, and deploy these insights to improve patient outcomes? One answer is an emerging and potentially disruptive technology: cloud computing, which has transformed the healthcare landscape by spurring connectivity, greater speed, data storage, security, and scalability.

Scaling Precise Insight-led Healthcare with Cloud Computing

Healthcare remains a dynamic field, and the cloud can be pivotal in processing large amounts of healthcare data in a timely manner to generate vital patient insights. It gives clinicians ready access to data, including patient history and best-practice case studies, for research and clinical application. This leads to more informed decision-making, spurring faster diagnosis and more precise healthcare. For instance, insights drawn from data, such as with AI tools, can guide personalized treatments, which are especially relevant for therapy areas like oncology, as every person's cancer and care trajectory is unique.

Simplified data sharing is key to fostering collaboration and meaningful doctor-patient interactions in the healthcare sector. Cloud adoption facilitates this, enabling readings from wearable technologies, such as glucose monitoring devices, to flow from the patient's device to the doctor, where they can be analyzed to generate actionable insights.
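To make that flow concrete, here is a minimal sketch of a wearable pushing a glucose reading to a cloud endpoint for a clinician's dashboard. The endpoint URL, token, and payload fields are hypothetical placeholders; real devices use vendor-specific, regulation-compliant APIs.

import datetime
import requests

def push_glucose_reading(patient_id: str, mg_dl: float, api_token: str) -> None:
    # Hypothetical cloud endpoint and payload, for illustration only.
    payload = {
        "patient_id": patient_id,
        "glucose_mg_dl": mg_dl,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    response = requests.post(
        "https://example-health-cloud.invalid/v1/readings",  # placeholder URL
        json=payload,
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=10,
    )
    response.raise_for_status()  # surface transfer errors back to the device

Once readings land in the cloud, the doctor's dashboard can query and trend them without any manual hand-off from the patient.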

There is an evident need for the quick transfer of simplified data through such cloud platforms to advance digital healthcare, such as by supporting remote diagnosis or virtual assistance in complex procedures. For instance, practitioners can easily collaborate through video streaming from Cath labs and operating theaters in real time, which supports quality care delivery. This can also be a boon to imaging, with more experienced technologists able to access scans remotely and guide local practitioners on treatment pathways for the patient.

Picking Up the Pace with Data-Driven Insights

Cloud computing also supports clinicians in providing faster, error-free, and patient-centric diagnoses. With its ability to facilitate high-speed data transfers and elastic compute capability, such platforms are minimizing medical errors and the time taken to diagnose and deliver care. For instance, for a patient arriving at a hospital with stroke symptoms, it previously took a typical computer up to six hours to process information from a CT scanner and produce a holistic image of the brain. However, the golden hour, the window for effective treatment, is roughly four hours.

To shorten the time to imaging insights, the sector has shifted to cloud technologies to support its medical devices. These platforms enable clinicians to turn data into impactful insights, and research into life-saving treatments.

Over time, cloud platforms are also being geared to provide additional services, including device protocol management and care pathway analytics, which can advance the role of AI in clinical care. These platforms are fundamental to increasing everyday operational efficiency: by enhancing technological capabilities and reducing cost, time, and labor, they ease the burden on health systems, including hospital networks. This spurs intelligent processes, making systems more agile and responsive in moments of need. Moreover, better interoperability between systems could save healthcare ecosystems $30 billion per year. The cloud also offers a scalable way to support care delivery in smaller clinics across remote or rural areas, as well as across multi-facility networks.

Cloud-connected systems elevate the delivery of high-quality care. To maintain best practices, they must be coupled with rigorous compliance and security standards so that patient data security and privacy underscore the digital transformation of healthcare. Today's dynamic healthcare environment needs integrated, innovative solutions like cloud platforms now more than ever.

The author is Girish Raghavan, Vice President - Engineering, GE Healthcare

More here:
Here's how cloud computing enables the transformation of the MedTech sector - ETCIO


Why AI and machine learning are drifting away from the cloud – Protocol

"For them, it was immediately obvious what the benefits were and how you only pay for the five microseconds that this code runs, and any idle is not being charged to you," Vogels said. "And you don't have to worry about reliability and security and multi-[availability zone] and all these things that then go out of the window. That was really an eye-opener for me: this idea that we sometimes have in our head that sort of the young businesses are more technologically advanced and moving faster. Clearly in the area of serverless, that was not the case."

AWS Lambda launched into general availability in 2015, and more than a million customers are using it today, according to AWS.

Vogels gave Protocol a rundown on AWS Lambda and serverless computing, which allows customers to build and run applications and services without provisioning or managing servers. He also talked about Amazon CodeWhisperer, AWS' new machine learning-powered coding tool, launched in preview in June; how artificial intelligence and ML are changing developers' lives; and his thoughts on AWS providing customers with primitives versus higher-level managed services.

This interview has been edited and condensed for clarity.

So what's the state of the state on AWS Lambda and how it's helping customers, and are there any new features that we can expect?

You'll see a whole range of different migrations happening. We've had folks from Capital One that migrated old mainframe code to Lambda. [IRobot, which Amazon announced plans to acquire on Friday], the folks that make Roomba, the automatic [vacuum] cleaner, have their complete back end running as serverless because, for example, that's a service that their customers don't pay for, and as such, they really wanted to minimize their costs yet provide a good service. There's a whole range of different projects happening, whether that is pre-processing images at some telescope deep in Chile, all the way up to monitoring Snowcones running on the International Space Station, where they run Lambda on that device as well and actually can do processing of imagery and things like that. It's become quite pervasive in that sense.

Now, the one thing is, of course, if you have existing code and you want to move over to the cloud, moving over to a virtual machine is easy; it's all in the same environment that you had on-premises. If you want to decompose the application you had but don't want to make too many code changes, containers are probably a better target for that.

But for quite a few of our customers that really want to start from scratch and really innovate and really think about [what] event-driven architectures look like, serverless quickly becomes the default target. Mostly also because it's not only that we see a significant reduction in cost for our customers, but also a significant reduction in their carbon footprint, because we're able to do much better packing on energy than customers would be able to do by themselves. We now also run serverless on our Graviton processors, so you'll easily see a 40% reduction in cost and energy usage.


But I'm always a bit ambivalent about the word serverless, mostly because many people associate it with when we launched Lambda. But in essence, the first service that we launched, S3, is also really serverless. For me, serverless means that our customers don't have to think about security, reliability, managing performance, managing scale, doing failover, all those kinds of things, and really controlling costs. And so, in essence, almost all services at AWS are serverless by nature. If you think about DynamoDB [a serverless NoSQL database], or if you think about Neptune [a graph database service] or any of the other services that we have, most of them are serverless because you don't have to think about provisioning them or managing them. That's all done for you.
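To make the event-driven model concrete, here is a minimal sketch of a Python Lambda handler triggered by an S3 upload, in the spirit of the image-processing examples above. The event shape follows S3's standard notification format; the bucket, keys, and the business-logic placeholder are illustrative assumptions, not code from AWS or the interview.

import json
import urllib.parse

def handler(event, context):
    # Lambda invokes this function once per event; you pay only for the
    # milliseconds it runs and never provision or manage servers.
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Business logic would go here, e.g. pre-processing an uploaded image.
        processed.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}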

Can you talk about the value of CodeWhisperer and what you think is the next big thing for or the future of low-code/no-code?

For me, CodeWhisperer is more an assistant to a developer. There's a number of application areas where I think machine learning really shines, and that is in augmenting professionals by helping them and taking away mundane tasks. And we already did that, of course, in AWS. If you think about development, there's CodeGuru and DevOps Guru, which are both machine-learning services that help customers with, on one hand, operations and, on the other, early security checks during the development process.

CodeWhisperer takes that even a step further. If you look at how our developers develop, there are quite a few mundane tasks where you will go search on the web for a piece of code: how do we do [single sign-on] login into X, Y or Z? Most people will just cut and paste or do a little translation. If that was in Python and you need to actually write it in TypeScript, we may do a translation on that.

There's a lot of work, actually, that developers do in that particular area. So we thought that we could really help our customers there by using machine learning to look at the complete base of, on one hand, the AWS code, the Amazon code and all the open-source code that is out there, and then do a qualitative test on that, and then include it into this body of work where we can easily help customers by just writing some plain text, and then saying, I want a [single sign-on] log-on here, and then the code automatically appears. And with that, we can do checks for security, we can do checks for bias. There's lots of other things that are now possible because we're basically assisting the developer in being more efficient and actually writing the code that they really want to write.
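As an illustration of the workflow Vogels describes, the sketch below pairs a plain-text prompt comment with the kind of completion a coding assistant might produce, here a presigned S3 download URL via boto3. This is not actual CodeWhisperer output; the function name and parameters are assumptions for the example.

import boto3

# Developer's prompt-style comment: "create a presigned URL so a user can
# download a report from S3 that expires after one hour"
def presigned_report_url(bucket: str, key: str, expires_in: int = 3600) -> str:
    s3 = boto3.client("s3")
    # Assistant-style completion: a standard boto3 call, nothing custom.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_in,
    )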

When we launched Lambda, I said the only code that will be written in the future is business logic. Well, it turns out we're still not completely there, but tools like CodeWhisperer definitely help us to get on that path because you can focus on what's the unique code that you need to write for the application that you have, instead of the same code that everybody else needs to write.

People really like it. It's also something that we continuously improve. This is not a standing-still product. As we look at more code, as we get more feedback, the service improves.

If I think about software developers, it's one of the few jobs in the world where you can be truly creative and can go to work and create something new every morning. However, there's quite a bit of heavy lifting still around that [that] sort of has nothing to do with your creativity or your ability to solve problems. With CodeWhisperer, we really tried to take the heavy lifting away so that people can focus on the creativity part of the development job, and I think anything we can do there, developers like.

In your tech predictions for 2022, you said this is the year when artificial intelligence and machine learning take on the undifferentiated heavy lifting in the lives of developers. Can you expand on that, and on how AWS is helping?

When you think about CodeWhisperer and CodeGuru and DevOps Guru, or Copilot from GitHub, this is just the beginning of seeing machine learning applied to augment humans. Whether it's a radiologist somewhere looking at imagery late at night who gets help from machine learning to compare these images, or whether it's a developer, we're really at the cusp of how machine learning will accelerate the way that we can build digital systems.

I was in Germany not that long ago, and the government there told me that they have 80,000 open IT positions. With all the scarcity of labor in the world, anything we can do to make the life of developers easier, so that they're more productive and so that it's easier for people who do not have a four-year computer science degree to get started in the IT world, will benefit all the enterprises in the world.

What's another developer problem that you're trying to solve, or what are developers asking AWS for?

If you're an organization like AWS or Amazon or quite a few other organizations around the world, you make use of the DevOps principle, where basically your developers also have operational tasks. If you do operations, there's information coming from 10 or 20 different sides: there's log files, there's metrics, there's dashboards. The work is in tying that information together, analyzing the massive amounts of log files being produced by systems in real time, surfacing that to the operators, showing that there may be potential problems here, and then giving context around it, because normally these log files are pretty cryptic. So what we do with DevOps Guru, for example, is provide context around it such that the operators can immediately start taking action, looking for what [the] root cause of particular problems is. So we're looking at all of the different aspects of development and operations to see what are the kind of things we can build to help customers there.

At AWS re:Invent last year, you put up a slide that read "primitives, not frameworks," and you said AWS gives customers primitives, or simple machines, not frameworks. Meanwhile, Google Cloud and Microsoft are offering larger, chunkier blocks such as managed services where customers don't have to do the heavy lifting, and AWS also seems to be selling more of those as well.

Let me clarify that. It mostly has to do with the speed of innovation at AWS.

Last year, we launched more than 3,000 features and services. So why are we still looking at these fine-grained building blocks? Let me go back to the beginning of AWS. When we started, the way software companies provided infrastructure or platforms was basically to give developers everything [but] the kitchen sink on Day One. And they would tell you, "This is how you shall develop software on this platform." Given that these platforms took quite a while to develop, basically what you operate is a platform that is already five years old and is looking five years back.

Werner Vogels gives his keynote at AWS re:Invent 2021. Photo: Amazon Web Services, Inc.

We knew that if cloud would really be effective, development would change radically. Development would indeed be able to scale quicker and make use of multiple availability zones and many different types of databases and things like that. So we needed to make sure that we were not building things from the past, but that we were building for how our customers would want to build in 2025. To do that, you don't give them everything and tell them what to do. You give them small building blocks, and that's what I mean by primitives. And all these small building blocks together make a very rich ecosystem for developers to choose from.

Now, quite a few, especially the more tech-savvy companies, are more than happy to put these building blocks together themselves. For example, if you want to build a data lake, you have to use Glue [a serverless data integration service], you have to use S3, maybe some Redshift, Kinesis for ingestion, Athena for ad hoc analytics. I think there's quite a few customers that are building these things by themselves.

But then there's a whole category of customers that just want a data lake. They don't want to think about Glue and S3 and Kinesis, so we give them a service or solution called Lake Formation. That automatically grabs all these things together and gives them this higher-level component.

Now, the fact is that we are delivering these higher-level solutions. For example, some customers just want a backup solution; they don't want to think about how to move things into S3 and then do some intelligent tiering [so] that if this data isn't accessed in two weeks, it is moved into cold storage. They don't want to think about that. They just want a backup solution. And so for that, we provide them a backup service. So we do have these higher-level services; they are more managed-style services for you, but they're all still based on the primitives that sit underneath. So whether you want to start with Lake Formation and later on maybe start tweaking things under the covers, that's still possible for you. While we are providing these higher-level components, where customers need to worry less about which components fit together, we still provide the underlying components to developers as well.
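For readers who want to see what composing the primitives yourself looks like, here is a minimal sketch of the tiering example above: an S3 lifecycle rule, set with boto3, that moves backup objects to cold storage after two weeks. The bucket name and prefix are placeholders, and a managed backup service would hide exactly this kind of configuration. (A lifecycle rule is age-based; access-based tiering would instead use the S3 Intelligent-Tiering storage class.)

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "cold-storage-after-two-weeks",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},  # placeholder prefix
                # Move objects older than 14 days to Glacier (cold storage).
                "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
            }
        ]
    },
)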

Is quantum computing something that enterprise CTOs should be keeping their eye on? Do you expect there to be an enterprise use for it, or will it be a domain just for researchers, or is it just too far out to surmise?

There is a back-and-forth there. If I look at some of the newer developments, it's clearly research oriented. The reason for us to provide Braket, which is our quantum compute service, is that customers generally start experimenting with the different types of hardware that are out there. And there's typical usage there. It's life sciences, it's oil and gas. All of these companies are already investigating whether they could see significant speed-ups if they would transform their algorithms into things that could run on a quantum machine.

Now, there's a major difference between, let's say, traditional development and quantum development. The tools, the compilers, the software principles, the books, the documentation for traditional development: that ecosystem is huge, and you get great support.

In quantum, I think what we'll see in the coming four or five years, as I listen to the Amazon researchers working on this, [is that] much of the work will go not only into hardware, but also into how to provide better software support around it, such that development for these types of machines becomes easier or even reaches the same level as traditional machines. But one of the things that I think is very, very clear is that we're not going to be able to solve new problems necessarily with quantum computing; we're just going to be able to solve old problems much, much faster. That's why the life sciences companies and health care and companies that are very interested in high-performance compute are experimenting with quantum, because it could accelerate their algorithms, maybe by orders of magnitude. But we still have to see the results of that. So I'm keeping a very close eye on it, because I think there may be very interesting workloads and application areas in the future.
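For a sense of how teams start experimenting, here is a minimal sketch using the Amazon Braket Python SDK: a two-qubit Bell circuit run on the local simulator. Swapping the LocalSimulator for a managed device would target actual quantum hardware through Braket; the shot count is arbitrary.

from braket.circuits import Circuit
from braket.devices import LocalSimulator

bell = Circuit().h(0).cnot(0, 1)        # entangle qubits 0 and 1
device = LocalSimulator()               # runs locally; no quantum hardware needed
result = device.run(bell, shots=1000).result()
print(result.measurement_counts)        # expect roughly half '00' and half '11'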

Go here to read the rest:
Why AI and machine learning are drifting away from the cloud - Protocol
