
Scott Reeder: Will artificial intelligence replace writers? – Quad-City Times

I refuse to use self-checkout lanes at the grocery store.

This isn't a new thing. I just don't like the technology. It's not that I find it too complicated. My objection is that when I use it, I'm contributing to someone losing their job.

Low-skill, repetitive jobs like those of checkout clerks are on the chopping block as technology marches onward. Perhaps I'm overly nostalgic, but I like chatting with the checkout clerks as I'm buying my milk and vegetables.

A machine has never given me a smile or suggested a less expensive brand of flour. But some well-intentioned folks working the checkout lane have done just that.

Truck drivers may soon face a similar threat as artificial intelligence is perfected and self-driving vehicles become commonplace. I was reading a Wall Street Journal article that said the future of truck stops is imperiled.


They may just become automatic fueling stations with no need to serve coffee, beef jerky and donuts. Showers and restrooms will become a thing of the past. Robots, after all, don't need such things.

A whole industry scattered across rural America may atrophy away.

When I read predictions like this, it's easy to become smug and self-congratulatory. I can applaud myself for pursuing college degrees and choosing a profession that requires thinking and creativity.

Writing, after all, isn't something that can be outsourced to a machine. Or can it?

This past week, I've been experimenting with artificial intelligence chatbots. I grew up in Galesburg, a railroad town that was the home of the poet Carl Sandburg. So just for fun, I asked a chatbot to write a poem about hometowns and railroads in Carl Sandburg's style.

In less than three seconds, it composed this poem:

I hear the train that comes from far

The one that brings me back to home

I see the smoke that trails the sky

The one that marks the way I roam

I feel the rumble and the roar

The one that shakes me to the bone

I smell the iron and the steel

The one that makes me not alone

I taste the dust and sweat and tears

The one that tells me what I've done

I touch the ticket and the rail

The one that shows me where I'm from

If I were a high school English teacher, I'd give a student who turned that in an A. And frankly, I couldn't have come up with a title for the poem in the time it took my cell phone to compose it.

That's intimidating. Last month, I spent hours writing a sermon. (I occasionally volunteer as a lay preacher.) Just curious about the limits of artificial intelligence, two weeks later I asked my chatbot to compose a sermon on the same topic in the Rev. Billy Graham's style.

Three seconds later, a full sermon appeared on my computer screen. It had the rhythm and meter of something Rev. Graham might have written. It stayed true to his Evangelical theology and emphasized Bible passages that would have been dear to one of history's most successful evangelists.

I sat there quietly intimidated. Could the work I do someday be outsourced to a machine? More importantly, will human beings lose their ability to compose literature on their own?

I foresee a lot of high school and college students turning to artificial intelligence rather than their own efforts to write poems and compositions. It's unlikely a teacher grading papers would know the difference.

With such a convenient crutch, will youngsters give up on the agonizing trial and error necessary to learn to write?

When I asked the chatbot to write a news story on a topic I had written about the previous week, the story it wrote was a disaster. I breathed a sigh of relief.

Well, a machine can only work with the body of information that is available to it. It can scour the internet for answers. But it can't pick up a phone and drag answers out of a politician reluctant to give them or interview a sobbing crime victim who needs the prompting of a reassuring voice to tell her story.

A friend who is a Canadian journalist puts it this way: Artificial intelligence plagiarizes; it doesn't generate new information.

Artificial intelligence lacks basic human qualities such as empathy, love and justice. It can only mimic those attributes, and not particularly well, for now.

Scott Reeder, a staff writer for the Illinois Times, can be reached at sreeder@illinoistimes.com.



The 10 Best TV Shows Of All Time, According To Artificial Intelligence – Looper

While "Seinfeld" broke new ground as a sitcom about a group of quirky pals in New York City, "Friends" took that basic idea and ran with it. With a slightly younger cast of twenty-something singles, it focuses more on the relationships between Ross, Chandler, Phoebe, Rachel, Joey, and Monica, and with a bigger emphasis on their romantic foibles. More heartfelt and even downright sentimental at times, "Friends" eschewes the cynicism of its GenX predecessor in favor of its fun-loving, lighthearted, and free-wheeling Millennial comedy, helping it capture the hearts of a younger generation in the late 1990s and into the 2000s.

With an ensemble of mostly newcomers, it launched the careers of David Schwimmer, Matthew Perry, Jennifer Aniston, Lisa Kudrow, and Matt LeBlanc. "Friends" became immensely popular in large part thanks to the will-they-won't-they romance between Ross and Rachel, while Chandler's sarcasm entered the common lexicon. Nearly unmatched in popularity throughout its run, it was a ratings juggernaut and became such a success that the stars of the show were eventually paid a million dollars per episode. Even now, nearly two decades after it came to a close, "Friends" feels almost as popular as it was in its prime, with new audiences discovering it thanks to the magic of streaming.

With all that has already been said about "Friends," there isn't much for artificial intelligence to add. Still, it mentions the show's ageless humor and its stories of heartache, friendship, career struggles, and family problems, which pretty much everyone can relate to, no matter what generation they belong to.


Artificial Intelligence in the Spotlight – NBC Bay Area

L.L. Bean has just added a third shift at its factory in Brunswick, Maine, in an attempt to keep up with demand for its iconic boot.

Orders have quadrupled in the past few years as the boots have become more popular among a younger, more urban crowd.

The company says it saw the trend coming and tried to prepare, but orders outpaced projections. They expect to sell 450,000 pairs of boots in 2014.

People hoping to have the boots in time for Christmas are likely going to be disappointed. The boots are back-ordered through February and even March.

"I've been told it's a good problem to have but I"m disappointed that customers not gettingwhat they want as quickly as they want," said Senior Manufacturing Manager Royce Haines.

Customers like Mary Clifford tried to order boots online, but they were back-ordered until January.

"I was very surprised this is what they are known for and at Christmas time you can't get them when you need them," said Clifford.

People who do have boots are trying to capitalize on the shortage and are selling them on eBay at much higher prices.

L.L. Bean says it has hired dozens of new boot makers, but it takes up to six months to train someone to make a boot.

The company has also spent a million dollars on new equipment to try and keep pace with demand.

Some customers are having luck at the retail stores. They have a separate inventory, and while sizes are limited, those stores have boots on the shelves.


Shift to on-premises, hybrid cloud models helping Pure Storage business – The Economic Times

Global flash storage solutions provider Pure Storage is winning business away from hyperscalers such as Amazon Web Services, Azure and Google Cloud as customers realise that the recurring costs of availing these services are higher than running an on-premises model, where firms maintain cloud servers on their own site, or opting for a hybrid model, said its chairman Charles Giancarlo.

"A part of their (hyperscalers') slowdown has to do with the overall economy. But part of it is businesses realising that while hyperscalers are inexpensive during the initial development phase, they are very expensive during the production phase or when (customers') applications get used at scale," Giancarlo told ET during his visit to Bengaluru last week. "So that repatriation is happening as they are realising that some of it (cloud storage) they are better off running themselves (on-premises), or that using a hybrid environment makes more sense economically."

Data storage can happen in three ways: a private or on-premises model; a public model, where the customer opts for the services of hyperscalers; and a hybrid model, which is a mix of on-premises cloud and public cloud services.

The NYSE-listed firm also said it is open to working with partners such as Flextronics, Celestica and Foxconn if they open a manufacturing unit in India, but the company will not set up a unit itself, as it relies completely on third parties. The company opened a research and development (R&D) centre in Bengaluru last year, and it currently employs 200 engineers there.


RIP to Dropcams, Nest Secure: Google is shutting down servers next year – Ars Technica

The Dropcam line was eventually replaced by Nest Cam.

In a post on the official Google Nest Community page, Google announced it is shutting down the service for several old Nest smart home products. Most of these have not been for sale for years, but since this is all hardware tied to the cloud, turning off the servers will turn them into useless bricks. The good news is that Google is giving existing users deals on hardware upgrades to something that is supported.

First up is Dropcam, which Nest and Google acquired in 2014 for $555 million and eventually turned into the Nest Cam line. Dropcam (and Dropcam Pro) server support is getting shut off on April 8, 2024, and Google says, "Dropcam will no longer work after that date, and you will no longer be able to use your Nest app to check status." The video clips are stored online, so Google adds, "If you wish to keep your video history, please download and save before this date."

Nest replaced the Dropcam line in 2015, so these cameras are all around 8 years old. Nest promises five years of support for its own products. Google isn't just cutting these users off, though; it's offering discounts on new Nest Cams if they want to keep rolling with the Google ecosystem. Google says if users are currently subscribed to Nest Aware, they'll get a free indoor, wired Nest Cam (a $100 value). Nest Aware is a $6 or $9 monthly subscription that lets you record video from the camera and store it online. Since that subscription fee will match the price of a Nest Cam in a year or two, it makes sense for Google to try to keep that subscription revenue flowing. If you don't have a Nest Aware subscription, Google is offering a 50 percent discount on the wired, indoor Nest Cam.

(Though I would encourage you to throw off the shackles of Google's always-turbulent walled garden and buy something that doesn't have a monthly fee or rely on the cloud. I like my UniFi Protect system for being self-hosted with decent hardware and a range of camera models, but there are many options out there. Nest Cams just do not offer anything that justifies the monthly fee, which gives them a high total cost of ownership.)

This is what a Nest Secure looks like. That's a hub/keypad up top, with a keychain presence sensor and two pieces of a "Nest Detect" sensor.

Nest Detect is a combo motion sensor and door monitor.

You can press a button on the Nest Detect to use a door without triggering the alarm.

The Nest Tag would let you tap to disarm.

Next up on the Nest chopping block is Nest Secure. This was a $500 home security system with a keypad hub, window and door sensors, motion detectors, and a keychain presence sensor. Google killed the hardware in 2020 but will keep supporting existing devices until the same day as Dropcam: April 8, 2024. Google says on that date "your Nest Secure will no longer work. It will not be accessible in the Nest app and won't connect to the internet."

When Google initially announced the Nest Secure's cancellation, it promised to support the device until at least November 2022 (exactly five years after the November 2017 release), but now it's getting about 6.5 years of support.

Nest Secure owners are offered a free upgrade to the new ADT system (Google calls this an "up to $485 value"), though you'll have to do a lot of new installation work, swapping out every sensor and component to get it up and running. Another option is a $200 credit on the Google Store. If you qualify for discounts for the Nest Secure or Dropcams, Google says they'll email you. There's also a recycling program for your dead products.

The Nest smart home ecosystem, "Works with Nest," also finally got a shutdown date: September 29, 2023. "Works with Nest" was Nest's original smart home ecosystem, allowing for things like your thermostat changing when you leave the house or allowing third-party apps to control your Nest system. Third-party devices could also plug into this system and somehow interface with your thermostat, cameras, or smoke detector.

Works with Nest got a death sentence in 2019, and has been sitting on Google death row ever since. Google originally wanted to shut down Works with Nest in August 2019, but delayed the termination after a public outcry. Google still blocked Works with Nest from adding new devices in August 2019 though, so any system has been limping along since then. If something broke, you were out of luck and couldn't replace it.

At the time, Google wanted Nest users to switch to the "Works with Google Assistant" ecosystem, which is the same basic idea of smart home communication, but without the "not invented here" baggage of the acquired Nest system. It uses a Google account instead of a Nest account, has different hardware compatibility, and, critically, it let you control devices by voice. Of course, the Google Assistant also seems to be deprioritized at Google, so Works with Google Assistant isn't called Works with Google Assistant anymore; it's now called "Works with Google Home." But "Google Home" doesn't refer to the original Google Home product, which was a smart speaker. That line was killed off and replaced with the Nest Audio speakers. "Google Home" now means the app that controls your smart devices, so "Works with Google Home" means you'll see it in the app. The Nest app, which can also control some Nest devices, is being phased out in favor of the Google Home app.


Healing With Psychedelics, Virtual Reality & Artificial Intelligence – Microdose Psychedelic Insights

"Set refers to the subject; setting is the session's environment. Matrix is the environment from which the subject comes: the environment surrounding the subject before and after the session, and the larger environment to which the subject returns." (Betty Eisner, LSD research pioneer, 1997)

The use of psychedelics in therapy and personal growth continues to expand, thanks in part to the growing body of research showing their potential for healing depression, anxiety, addiction, and other mental health issues. At the same time, virtual reality (VR) has become an increasingly popular tool for creating immersive and powerful therapeutic experiences, whether for treating phobias, addiction, or trauma. Together, these fields offer a new frontier for innovation and therapeutic healing.

This article will explore how psychedelics and VR are being combined with AI to create powerful therapeutic experiences. So, let's look at what we know about psychedelics, VR, and AI separately.

Psychedelics are natural or man-made compounds that produce profound changes in consciousness, including non-ordinary states of perception and intense emotional experiences. The physiological effects are usually mild and short-lived, but the psychological impacts can be significant.

VR is a computer-generated simulation of a three-dimensional environment that users can interact with using specialized hardware or equipment. It can be used to create immersive, interactive experiences that allow users to explore and manipulate a virtual environment.

Artificial Intelligence (AI) is a technology that enables machines to think like humans and act intelligently. AI systems can recognize patterns, learn from experience, and make decisions based on their analysis. This technology has been used in various industries, including healthcare, finance, automotive, and more. AI is rapidly changing how we interact with the world around us by providing more innovative solutions and increased efficiency for everyday tasks.

When these three technologies are combined, the potential for therapeutic applications is profound. By manipulating a user's senses in a virtual world, therapists may be able to create powerful therapeutic experiences that could not be achieved through traditional methods.

Research is building to assess the synergies between VR and LSD, psilocybin, and MDMA in treating certain psychiatric disorders effectively. In this literature review, we look at the efficacy of VR, AI, and psychedelics in treating various mental health conditions, including substance use disorder, post-traumatic stress disorder (PTSD), depression, anxiety, obsessive-compulsive disorder (OCD), eating disorders, bipolar disorder, and schizophrenia. The results show evidence suggesting that VR and psychedelic therapies may effectively treat certain psychiatric diseases.

Alone, psychedelic psychotherapy has been well researched and used to treat depression, anxiety, and post-traumatic stress disorder (PTSD). With AI, we have a further opportunity to revolutionize research into psychedelic drugs and unlock their potential as medical treatments. With its ability to quickly process large amounts of data and uncover patterns in complex systems, AI can help scientists make progress in understanding how psychedelics affect our brains and bodies, paving the way for more effective treatments in the future. AI can also be used to develop personalized treatment plans for individual patients based on their unique needs, and it allows drug developers to search more comprehensively for novel psychedelics with desired therapeutic applications.

HMNC Brain Health uses its AI platform to develop groundbreaking therapies, combining psychiatry, genomics, and analytics. AI is also used to gain precision when assessing mental health, biomarkers, and DNA. AI enables researchers to conduct more comprehensive searches of chemical databases and uncover molecules that may have otherwise gone undiscovered. AI-assisted searches also provide greater insight into molecular structure and predict likely outcomes for potential medicines, which could lead to the earlier discovery of effective treatments.
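
To make the database-search idea concrete, here is a minimal sketch of the fingerprint-similarity step that such AI-assisted searches build on. It uses the open-source RDKit library rather than any platform mentioned above, and the SMILES strings are toy placeholders, not real drug candidates; a production pipeline would screen millions of compounds and layer learned models on top of this kind of scoring.

```python
# A minimal sketch of screening a (tiny, made-up) chemical database by
# fingerprint similarity with open-source RDKit. The SMILES strings are
# placeholders; real searches cover millions of compounds and add ML models.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem


def fingerprint(smiles: str):
    """Return a Morgan (circular) fingerprint bit vector for a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)


query = "c1ccc2[nH]ccc2c1"  # indole scaffold, used here only as a toy query
database = {                # hypothetical candidate molecules
    "candidate_a": "CCOC(=O)c1ccccc1",
    "candidate_b": "c1ccc2[nH]c(C)cc2c1",
    "candidate_c": "CCN(CC)CC",
}

query_fp = fingerprint(query)
scores = {
    name: DataStructs.TanimotoSimilarity(query_fp, fingerprint(smi))
    for name, smi in database.items()
}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: Tanimoto similarity = {score:.2f}")
```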

VR can simulate real-world scenarios that evoke strong emotional reactions from patients, allowing them to confront their fears in a safe environment. This immersive environment could help individuals process traumatic memories or gain insight into their behavior more quickly than traditional talk therapy alone. The need for more research and an evidence-based strategy to design and roll out virtual reality (VR) applications in psychedelic-assisted psychotherapy has been identified. VR research has far-reaching implications in various fields, such as psychology, pharmacology, and medicine. Seeking a better understanding of the inner workings of the brain, researchers have turned to VR and found that the experience and its benefits can resemble those of LSD or other psychedelics. Using VR as a research tool has allowed a deeper understanding of the effects of psychedelics, such as an altered perception of time and space, improved creativity, enhanced mood regulation, and deeper self-reflection. The same technology can be used to study how people respond to various kinds of stimuli in the environment.

"One way to use VR is by allowing patients to create and build their setting from the inside out in the preparation process for the psychedelic session." (Dr. Prash Puspanathan, Enosis Therapeutics)

Enosis Therapeutics has advanced research into VR technology used alongside psychedelic therapy to create immersive patient experiences. For example, as a state-altering method, VR can bridge normal and altered consciousness to ease the transition process and reduce some patients' anxiety before a psychedelic experience. This allows a better capacity to surrender to the experience and go into it more easily, giving patients a chance to dive deeper. After all, this is your personal matrix.

Enosis Therapeutics' modules enable individuals to experience the full range of a psychedelic journey. They recently conducted a groundbreaking study to determine the effectiveness of virtual reality (VR) in psychedelic therapy. The results were encouraging, as participants reported increased satisfaction and therapeutic efficacy from VR scenarios. This provides strong evidence for using VR technology in therapeutic protocols, demonstrating that incorporating such immersive experiences can positively impact the outcome of psychedelic therapy sessions. With this information, Enosis is now focused on leveraging properties unique to immersive environments, such as the capacity to buffer unwanted stimuli or to reliably and quickly induce a mindful presence, to ensure more successful patient outcomes.

The potential benefits of combining psychedelic psychotherapy with AI and VR are vast. By leveraging these technologies, clinicians may be able to provide more effective treatments for mental health issues while reducing the risk of adverse effects associated with psychedelics. As research explores the synergies between these technologies and psychedelic psychotherapy, we may soon see even more significant advances in this field.

Entheo Digital develops Breath, Light, And Sound Therapy (BLAST) biofeedback systems that amplify psychedelic-assisted journeys, with or without medicine. Their systems provide immersive environments that modulate three aspects of experience: respiration, visual field, and auditory/tactile sensation. The system uses vocal toning, or long vowel tones, to generate an audio-visual trance experience that activates the parasympathetic nervous system. This helps to reduce stress, increase oxygen flow throughout the body, and cultivate positive states of mind. The visual field is modulated with specialized LED light patterns designed to activate specific brain waves like alpha, delta, and theta, which are associated with improved attention, cognitive flow, and focus. Finally, the musical experience is entirely generated from the user's voice, which provides a personalized journey that results in feelings of greater self-efficacy. This system can help people rapidly learn mindfulness skills and better understand their psychological and emotional state by utilizing neuroscience techniques to understand how the brain works.

"It's not like the end of the world, just the world as you think you know it." (Rita Dove)

Neuroscience helps us better understand how psychedelics affect our mental states. For example, fMRI scans can show changes in brain activity after a psychedelic experience, which can help researchers pinpoint potential therapeutic benefits and uncover how psychedelic-assisted psychotherapy works. Neuroscience studies have already shown that certain psychedelic compounds can alter activity and communication in the brain, leading to changes in perception and cognition. AI has already been used to detect patterns in neural networks and can be utilized to identify new pathways between various brain regions. Additionally, VR provides an interactive platform for neuroscience research by allowing users to experience simulated environments with carefully monitored modifications. Combining these technologies allows for a completely novel, immersive experience where the user interacts with a simulated environment, giving researchers valuable insight into how the brain reacts to different stimuli.

As neuroscience continues to evolve, the potential of psychedelics, AI, and VR to further our understanding of the brain is becoming increasingly apparent. For example, a recent study found that participating in a group virtual reality experience can produce responses similar to those triggered by psychedelics. This suggests that VR could be used as a full-spectrum tool to capitalize on and catalyze the innately therapeutic aspects of psychedelic substances.

Researchers investigate how different environments impact our brains and behavior through AI and VR technology (https://neurosciencenews.com/vr-psychedelics-21412/). The combination of psychedelics, AI, and VR has opened up new possibilities for neuroscience research. As these technologies continue to advance, we may soon be able to explore the brain's inner workings in ways we never thought possible. With this newfound knowledge comes the potential for us to better understand ourselves and develop treatments for various neurological disorders.

Psychedelics can cause confusion, disorientation, anxiety, panic, or paranoia. They also interfere with the perception of reality, which may lead to a distorted sense of self-awareness and blurred boundaries between the user's existence and non-ordinary consciousness. It is essential to be aware of and prepared for the possible risks before engaging in psychedelic use. The medicalized model is very focused on screening and contraindications.

Virtual Reality creates a simulated environment that immerses users in an artificial world. The risk is that users may become so immersed in virtual environments that they lose sight of their in-real-life (IRL) lives, creating psychological issues such as disillusionment with the natural world and addiction to virtual worlds. Again, a supervised approach to preparation, journeying, and integration is the key.

Artificial Intelligence: The far-reaching implications for our society are both positive and negative. AI can be used as an autonomous decision-making machine that humans do not fully understand or control. We must develop ethical standards for AI applications to ensure their use does not infringe upon our civil liberties or humanity's greater good.

In summary, the research into psychedelics, AI, and VR offers tremendous opportunity for exploring how we think about our relationship with technology itself (the idea that humanity does not exist separate from technology but is instead intertwined with it), as well as promising potential to uncover new treatments for various conditions, whether physical or mental. Further research into these topics could also help identify ways machine learning algorithms can better interpret human behavior while preserving autonomy and choice over our data (bio, psycho, and social context), resulting in a win-win situation for researchers and end users. But this is not a panacea, and there is a shadow side to these technologies and frameworks that needs careful consideration in their use and application.

Note: This article presents a study, research, and a point of view that should not be taken as advice. The author offers transpersonal coaching and counseling. He is an Interfaith MDiv, Contemplative Psychotherapist, and Psychedelic-Assisted Therapy Provider who holds space for therapeutic presence, preparation, integration, and the use of digital therapeutics. The Work Mindfulness Project: http://www.workmindfulness.com; The Mindfulness Experience podcast: https://themindfulnessexperience.podbean.com/


Google Cloud's Sam Sebastian on how the pandemic accelerated the shift to cloud, converting the skeptics, and why Canada is now home – Toronto Star

Much in tech has changed since Sam Sebastian first joined Google in 2006. At first, he and his colleagues were explaining newfangled search advertising to customers. Today, the Ohio-born executive is at the forefront of another major leap, cloud computing, as the Canadian head of Google Cloud's operations.

When COVID-19 forced much of the world's economy into lockdown, the thought of keeping data trapped on office-bound servers was intolerable to many CEOs. Cloud storage boomed and forced painstaking digital transformations through in a matter of months rather than years.

Google Cloud was fairly well positioned to capitalize on the sudden demand for off-premises yet adaptable places to store data. In a matter of minutes, companies can scale up their storage to handle an influx of new data, or shed excess capacity. This flexibility is appealing to all three of Google Cloud's main cohorts of customers: digital-native firms like Lightspeed, stalwarts like Canadian Tire, and next-generation AI-oriented customers like Mobius.

To many average consumers, whether or not their favourite brands rely on the cloud or locally stored data is irrelevant. But firms like Google have made big business from convincing sometimes-reluctant CEOs to shell out for new operating systems capable of retrieving data from anywhere.

You've bounced to and from multiple executive roles at Google. Your last position there ended in 2017. What keeps you coming back?

Yeah, I'm a boomerang Googler. There's a few of us around. I started at Google 17 years ago in the U.S. I was in different roles in the States, on the ad side, for eight years. About nine years ago, I moved my family to Canada and ran the Canadian business for about three and a half years. At the time, most of our business was ads.

I loved every minute of it. But I had the opportunity to be the CEO of Pelmorex Corp., a big brand that included the Weather Network and MétéoMédia. They had a business in Spain, Eltiempo. So, after 11 years at Google, an opportunity to go be a CEO of a strong Canadian brand that needed to digitally transform was too good of an opportunity to pass up.

I thought I had a once-in-a-lifetime opportunity, in the early days of Google, to be on the ground floor when search advertising and YouTube were first kicking off. Now, I have the opportunity to come back and almost have a second chance at a once-in-a-million opportunity, cloud, which is new to many folks. We're on the cusp of this generative AI revolution, which is also tied in with the cloud. To be on another rocket ship, with another kind of revolutionary shift, was just too good to pass up.


What's it like going back to a very senior position at Google after being a CEO at Pelmorex?

In the end, I have always thought about what I want to do in my career in three ways. Number one, I want to keep learning. As long as I'm in a job and I'm learning a whole new set of skills, it doesn't really matter to me what role I'm in. Number two, I love to lead. Regardless of whether I'm leading an entire organization or a country inside of a larger multinational, so long as I'm leading and working with people that inspire me, then I'm good. And lastly, I need to add value.

I had never done the CEO role before. I could learn a ton. But I was an ads guy for 30 years. So, at Google, I had an opportunity to learn. I could lead great young Googlers and very experienced Googlers in cloud. I could both learn from them, but also be inspired by them. I knew the playbook for Google on the ad side because I had built out its country infrastructure.

You have a lot of leadership experience, but you're new to cloud computing itself. Are you still learning about it as you go? Are you leaning on other people? How does that work?

We have an incredible team who has made huge investments in this space, from training, evangelizing, and technology. I lean on the team significantly. But it's a relatively new space. There are very few veterans in this space because it has only really been mature for a handful of years. My ultimate clients are CEOs, the C-suites and boards, and I'm trying to convince them to make these tough decisions to modernize their infrastructure.

And I did that for five years, figuring out how I was going to migrate on-premise stuff to the cloud at Pelmorex. We all had this MBA in cloud for two years during COVID, meaning anyone who was running a company had to figure out how to do it from home. COVID really saw demand for cloud services explode. So, to a certain extent, I had been through these wars myself as a C-suite leader.

Some businesses are very skeptical about the benefits of cloud computing. How do you convert them?

There are a couple of ways. Number one, every business has a core function. The core function of Pelmorex was weather forecasting. It was not managing data centres or modernizing technology. Doing so requires a huge set of resources, expertise and skills. To an extent, I can rent that experience and technology, and use it as I need it. That's the ideal business model for someone who wants to really focus on their core business. When you sit down with a CEO, they will get that right away.

Then you have to go deeper and ask about the objections. They may say a cloud migration will take a long time, it's not as secure as on-site storage, or there isn't a specific solution for their industry. But we can counter each of these objections. So we have to talk at the highest level with the CEO to inspire them, and then work inside the organization, and with our partners. COVID was a pretty big demand generator because, all of a sudden, folks had to manage all this stuff remotely, which is a bit more difficult when you're not in the cloud.

The vast majority of digital transformations fail. How are you trying to change that equation?

A couple of things. Digital transformations are huge projects, and any huge project comes with a lot of risk. What we try to do is break that project down, atomize it, and create a bunch of different milestones over time, and then put all the right people on various parts of the project. One client, one vendor, one cloud player can't solve everything.

Whenever there's a burning platform, and a company has to succeed, there is no other alternative. COVID was a great example. You'd be amazed at what a company or an industry can do in a matter of months. Now, we're trying to leverage that to create a sense of a burning platform, a "no excuse but success" mentality, so we can push folks to move.

There is a perception that cloud computing is a lot less secure than relying on on-site data storage. What do you say to critics who say to avoid the cloud because it is insecure?

Just look at Google. And you can look at Amazon as well. These are massive companies that built massive infrastructure targeted by the biggest cyber threats, both internally and externally, of any company in the world. And they've been secure. We've had to build so much threat detection, security and authentication protocols inside all of our own technology. Now, all we're doing is making those same attributes available to customers.

The hard part for customers is that they feel out of control. Once we walk them through how, frankly, they're more exposed to risks with the work they're doing on-premises, their objections go away. Some of the biggest threats come from people inside an organization, who have access to a lot of things that they might not otherwise have with the cloud.

A lot of Googlers in executive roles end up going back to San Francisco. Do you think that's in the cards for you?

I don't. That was the thinking when I moved to Canada nine years ago. A lot of times, executives move up here, they do a stint, they learn some things, and they take it back. After four years, the kids loved the country. My wife and I love the country. We have built some great relationships. I had built a profile inside the country so that I could continue to take on new opportunities. And so, our entire perception changed.

That's why I had no problem leaving Google to go to a Canadian company and get even more experience inside Canada. Now, I've come back to Google in Canada. Both of my kids are in university in Canada. We've got no plans to leave. We love it here. And we still have lots of family back in the States, and we go back and forth, obviously. But this is home now.

This conversation has been edited for length and clarity.


Does ChatGPT save your data? Here’s what you need to know – Android Authority


Time is money, and chatbots like ChatGPT and Bing Chat have become valuable tools. They can write code, summarize long emails, and even find patterns in large volumes of data. However, as with any free-to-use technology, you may be wondering about the privacy implications of it all. More specifically, does ChatGPT save your data, and can you trust it?

So in this article, let's break down ChatGPT's data storage practices and how it handles your sensitive data. We'll also detail how to permanently delete your data from ChatGPT and OpenAI's servers.

Does ChatGPT save conversations and user data?


Yes, OpenAI saves your ChatGPT conversations and prompts for future analysis. According to a FAQ page published by the company, its employees can selectively review chats for safety. In other words, you can't assume anything you say to ChatGPT is kept private or confidential.

All of your conversations with ChatGPT are stored on OpenAI's servers.

Besides prompts and chat conversations, OpenAI also saves other data when you use ChatGPT. This includes account details like your name and email, as well as your approximate location, IP address, payment details, and device information. Most websites collect this data for analytics purposes, so it's not unique to ChatGPT. However, it does mean that OpenAI can hand over your ChatGPT conversations and other data to courts or law enforcement.

According to OpenAI, its in-house AI trainers may use your ChatGPT conversations for training purposes. Like any machine learning-based technology, OpenAI's GPT-3.5 and GPT-4 language models were trained on billions of existing text samples. However, these can also be improved further through a process known as fine-tuning, which involves re-training the model on a smaller dataset (such as user chats).
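
As a rough illustration of what fine-tuning on chat transcripts involves (OpenAI's internal tooling is not public), here is a minimal sketch using the open-source Hugging Face stack; the base model name and the two tiny in-memory "transcripts" are placeholders, and a real run would use far more data, careful filtering, and much more compute.

```python
# Minimal fine-tuning sketch: re-train a pretrained language model on a small
# set of chat transcripts. Illustrative only -- open-source stack, toy data.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # small open model standing in for a proprietary one
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical transcripts standing in for the user conversations a provider
# might (with consent) fold back into training.
chats = [
    "User: Summarize this email for me.\nAssistant: Sure, here is a short summary...",
    "User: Write a haiku about rain.\nAssistant: Soft rain taps the roof...",
]
dataset = Dataset.from_dict({"text": chats}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="chat-finetune",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # updates the pretrained weights on the small chat dataset
```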

We already know that OpenAI has performed some fine-tuning on the models, since it admitted to hiring humans to simulate ideal chat conversations. Now that the chatbot is widely available, it's only logical that the company will continue collecting user data to train and improve ChatGPT. You can opt out if you don't want your data to be used for training, but this is a manual process that involves filling out a form.

Does OpenAI or ChatGPT sell user data?


OpenAI lets anyone use ChatGPT for free, even though generating responses likely costs the company a lot of money. So naturally, you might assume that OpenAI has found a way to sell or monetize your data. Luckily, that's not the case. According to an OpenAI support page, your ChatGPT conversations aren't shared for marketing purposes.

As for how OpenAI stores ChatGPT data, the company says that its systems are located in the US. The company also requires anyone accessing your data to sign confidentiality contracts and uphold other security obligations.

Your ChatGPT data isn't sold to advertisers, but OpenAI employees may see it.

So how does a small startup like OpenAI serve millions of users without selling their saved data? In early 2023, Microsoft invested $10 billion in OpenAI. The company already uses OpenAI's GPT-4 language model for many of its own services, including Bing Chat. We also know that ChatGPT exclusively uses Microsoft's Azure cloud servers to generate responses.

From all of this, we can infer that OpenAI's server costs are subsidized, which allows the company to continue offering ChatGPT for free. In the future, ChatGPT Plus and other revenue sources could help OpenAI turn a profit without selling user data.

Should you trust ChatGPT with your data?


In the few months since ChatGPT first became available to the public, it has already fallen victim to a couple of data leaks.

In one instance, a software bug resulted in some users seeing others' chat titles when logged in. Luckily, the bug didn't expose full chat histories or other sensitive data. That wasn't the only leak, either; another one revealed the last four digits of some users' saved credit cards. These incidents indicate a tangible risk if ChatGPT does indeed save all user data.

ChatGPT has suffered from data leaks already, but most user data remained safe.

OpenAI has managed to keep full chat records reasonably private and away from prying eyes so far. But that could change at any time if it falls victim to a data breach or intrusion. After all, we've seen successful attacks executed against security-conscious companies like LastPass.

To that end, you should not share sensitive personal information, trade secrets, or medical data with ChatGPT or rival chatbots like Google Bard. In fact, many companies have explicitly clamped down on chatbots for this reason. Samsung Semiconductor, for example, reportedly found its employees had shared confidential information with ChatGPT. It has now imposed a character limit on ChatGPT prompts, making it harder to spill company secrets.

Reader poll (37 votes): Yes (11%); No, but I'll use it anyway (76%); No, I plan to delete my account (14%).

How to delete your ChatGPT data


It's possible you didn't know until now that ChatGPT saves your conversations and prompts. So is there a way to clear all of your interactions with the chatbot? Well, clearing your history when logged into your ChatGPT account only removes the data from your view. It doesn't actually delete anything from OpenAI's servers.

For now, the only way to permanently delete your ChatGPT data is to close your OpenAI account. Here's how to do that:

Once OpenAI goes through with the deletion, all of your ChatGPT data and conversations should be permanently removed. Keep in mind that this process takes anywhere from one to two weeks. If you'd prefer not to log in or visit the help section, you can also send an account closure request to deletion@openai.com.

FAQs

No, you need to delete your OpenAI account to permanently delete your ChatGPT chat history. If you simply clear your chats instead, your data will continue to live on OpenAI's servers.

If you can't see your past chats once logged into your ChatGPT account, you may have cleared your account history, or the service may be experiencing heavy demand at the moment.

No, you cannot export conversations from ChatGPT at the moment.


EXCLUSIVE: ‘A Chernobyl for AI’ looms if artificial intelligence is kept unchecked, says scientist Stuart Russell – Business Today

Stuart Russell, a Professor of Computer Science at UC Berkeley and a leading expert in artificial intelligence and machine learning, has issued a warning about the potential dangers of unchecked AI development. As co-author of the standard text in the field, "Artificial Intelligence: A Modern Approach," Russell has unparalleled credentials. In an exclusive interview with Business Today's Aayush Ailawadi, he emphasizes the need for reasonable guidelines and safety measures to prevent the possibility of a "Chernobyl for AI": a catastrophic event that could have far-reaching consequences.

Russell is one of the prominent AI experts who signed the open letter calling for a pause on training AI systems more powerful than GPT-4. Other prominent signatories include Tesla CEO Elon Musk and Apple co-founder Steve Wozniak.

In an exclusive interview with Business Today, he listed the perils of unchecked AI and warned against the potential for "a Chernobyl for AI." Russell, who has been an AI researcher for 45 years, acknowledges the unlimited potential for artificial intelligence to benefit the world but emphasizes the need for reasonable guidelines to ensure its safe development.

Russell explains that developing guidelines for AI systems may take time, but it is necessary to demonstrate convincingly that a system is safe before it can be released. He compares the process to building a nuclear power plant or an airplane, where safety guidelines must be met to prevent catastrophic consequences.

He said, "What we're asking for is, to develop reasonable guidelines. You have to be able to demonstrate convincingly for the system to be safely released, and then show that your system meets those guidelines. If I wanted to build a nuclear power plant, and the government says, well, you need to show that it's safe, that it can survive an earthquake, that it's not going to explode like Chernobyl did." He further added, "we do not want a Chernobyl for AI."

The scientist warns that without proper guidelines and safety measures, there is a risk of a "Chernobyl for AI," referring to the nuclear disaster that occurred in Ukraine in 1986, which devastated the nuclear industry and had long-lasting effects on the environment and human health.

Russell acknowledges that it is difficult to predict exactly what a Chernobyl-like disaster for AI might entail, but emphasizes the need to take the possibility seriously. He calls for the application of common sense in the development of powerful AI systems to ensure that they do not pose a threat to society.



H100, L4 and Orin Raise the Bar for Inference in MLPerf – Nvidia

MLPerf remains the definitive measurement for AI performance as an independent, third-party benchmark. NVIDIA's AI platform has consistently shown leadership across both training and inference since the inception of MLPerf, including in the MLPerf Inference 3.0 benchmarks released today.

"Three years ago when we introduced A100, the AI world was dominated by computer vision. Generative AI has arrived," said NVIDIA founder and CEO Jensen Huang.

"This is exactly why we built Hopper, specifically optimized for GPT with the Transformer Engine. Today's MLPerf 3.0 highlights Hopper delivering 4x more performance than A100.

"The next level of Generative AI requires new AI infrastructure to train large language models with great energy efficiency. Customers are ramping Hopper at scale, building AI infrastructure with tens of thousands of Hopper GPUs connected by NVIDIA NVLink and InfiniBand.

"The industry is working hard on new advances in safe and trustworthy Generative AI. Hopper is enabling this essential work," he said.

The latest MLPerf results show NVIDIA taking AI inference to new levels of performance and efficiency from the cloud to the edge.

Specifically, NVIDIA H100 Tensor Core GPUs running in DGX H100 systems delivered the highest performance in every test of AI inference, the job of running neural networks in production. Thanks to software optimizations, the GPUs delivered up to 54% performance gains from their debut in September.

In healthcare, H100 GPUs delivered a 31% performance increase since September on 3D-UNet, the MLPerf benchmark for medical imaging.

Powered by its Transformer Engine, the H100 GPU, based on the Hopper architecture, excelled on BERT, a transformer-based large language model that paved the way for today's broad use of generative AI.

Generative AI lets users quickly create text, images, 3D models and more. It's a capability that companies from startups to cloud service providers are rapidly adopting to enable new business models and accelerate existing ones.

Hundreds of millions of people are now using generative AI tools like ChatGPT, also a transformer model, expecting instant responses.

At this iPhone moment of AI, performance on inference is vital. Deep learning is now being deployed nearly everywhere, driving an insatiable need for inference performance from factory floors to online recommendation systems.

NVIDIA L4 Tensor Core GPUs made their debut in the MLPerf tests at over 3x the speed of prior-generation T4 GPUs. Packaged in a low-profile form factor, these accelerators are designed to deliver high throughput and low latency in almost any server.

L4 GPUs ran all MLPerf workloads. Thanks to their support for the key FP8 format, their results were particularly stunning on the performance-hungry BERT model.

In addition to stellar AI performance, L4 GPUs deliver up to 10x faster image decode, up to 3.2x faster video processing and over 4x faster graphics and real-time rendering performance.

Announced two weeks ago at GTC, these accelerators are already available from major systems makers and cloud service providers. L4 GPUs are the latest addition to NVIDIA's portfolio of AI inference platforms launched at GTC.

NVIDIA's full-stack AI platform showed its leadership in a new MLPerf test.

The so-called network-division benchmark streams data to a remote inference server. It reflects the popular scenario of enterprise users running AI jobs in the cloud with data stored behind corporate firewalls.
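
As a rough sketch of that scenario (not the actual MLPerf harness or any NVIDIA API), the snippet below shows a client keeping its data on-premises and streaming each request to a remote inference endpoint over HTTPS; the URL and the request/response schema are hypothetical.

```python
# Hypothetical client for remote inference: data stays on-premises and is
# streamed per request to a model server in the cloud. Endpoint and schema
# are made up for illustration; they are not an NVIDIA or MLPerf interface.
import base64
import requests

INFERENCE_URL = "https://inference.example.com/v1/models/resnet50:predict"


def classify_remote(image_path: str) -> dict:
    """Send a locally stored image to the remote server and return its prediction."""
    with open(image_path, "rb") as f:
        payload = {"instances": [{"b64": base64.b64encode(f.read()).decode("ascii")}]}
    resp = requests.post(INFERENCE_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. {"predictions": [{"label": "tabby cat", "score": 0.91}]}


if __name__ == "__main__":
    print(classify_remote("local_photo.jpg"))
```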

On BERT, remote NVIDIA DGX A100 systems delivered up to 96% of their maximum local performance, slowed in part because they needed to wait for CPUs to complete some tasks. On the ResNet-50 test for computer vision, handled solely by GPUs, they hit the full 100%.

Both results are thanks, in large part, to NVIDIA Quantum InfiniBand networking, NVIDIA ConnectX SmartNICs and software such as NVIDIA GPUDirect.

Separately, the NVIDIA Jetson AGX Orin system-on-module delivered gains of up to 63% in energy efficiency and 81% in performance compared with its results a year ago. Jetson AGX Orin supplies inference when AI is needed in confined spaces at low power levels, including on systems powered by batteries.

For applications needing even smaller modules drawing less power, the Jetson Orin NX 16G shined in its debut in the benchmarks. It delivered up to 3.2x the performance of the prior-generation Jetson Xavier NX processor.

The MLPerf results show NVIDIA AI is backed by the industry's broadest ecosystem in machine learning.

Ten companies submitted results on the NVIDIA platform in this round. They came from the Microsoft Azure cloud service and system makers including ASUS, Dell Technologies, GIGABYTE, H3C, Lenovo, Nettrix, Supermicro and xFusion.

Their work shows users can get great performance with NVIDIA AI both in the cloud and in servers running in their own data centers.

NVIDIA partners participate in MLPerf because they know it's a valuable tool for customers evaluating AI platforms and vendors. Results in the latest round demonstrate that the performance they deliver today will grow with the NVIDIA platform.

NVIDIA AI is the only platform to run all MLPerf inference workloads and scenarios in data center and edge computing. Its versatile performance and efficiency make users the real winners.

Real-world applications typically employ many neural networks of different kinds that often need to deliver answers in real time.

For example, an AI application may need to understand a user's spoken request, classify an image, make a recommendation and then deliver a response as a spoken message in a human-sounding voice. Each step requires a different type of AI model.

The MLPerf benchmarks cover these and other popular AI workloads. That's why the tests ensure IT decision makers will get performance that's dependable and flexible to deploy.

Users can rely on MLPerf results to make informed buying decisions, because the tests are transparent and objective. The benchmarks enjoy backing from a broad group that includes Arm, Baidu, Facebook AI, Google, Harvard, Intel, Microsoft, Stanford and the University of Toronto.

The software layer of the NVIDIA AI platform, NVIDIA AI Enterprise, ensures users get optimized performance from their infrastructure investments as well as the enterprise-grade support, security and reliability required to run AI in the corporate data center.

All the software used for these tests is available from the MLPerf repository, so anyone can get these world-class results.

Optimizations are continuously folded into containers available on NGC, NVIDIA's catalog for GPU-accelerated software. The catalog hosts NVIDIA TensorRT, used by every submission in this round to optimize AI inference.
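
For readers unfamiliar with TensorRT, here is a generic sketch of the common ONNX-to-TensorRT flow based on the public Python API of the TensorRT 8.x era: parse a trained model and build an optimized FP16 engine. This is not the code used in the MLPerf submissions, and "model.onnx" is a placeholder.

```python
# Generic ONNX -> TensorRT engine build (TensorRT 8.x-style Python API).
# "model.onnx" is a placeholder; MLPerf submissions use their own harnesses.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):       # report any parse errors
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)            # allow reduced precision for speed

serialized_engine = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(serialized_engine)                   # save the optimized engine for deployment
```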

Read this technical blog for a deeper dive into the optimizations fueling NVIDIA's MLPerf performance and efficiency.
