Category Archives: Deep Mind

AI as powerful as human mind possible in next 5 years, says Google DeepMind CEO Demis Hassabis – The Indian Express

AI has been creating headlines all over the world. From brisk-paced developments to a section of dignitaries seeking a moratorium, AI has been causing a stir in the world of technology. The most prominent of all fears is AI becoming as powerful as humans and potentially overpowering mankind.

Now, it seems that day may not be far off. In a recent interview with The Wall Street Journal, Google DeepMind CEO Demis Hassabis said that AI could achieve human-level cognitive abilities in the next five years. He was speaking at the media outlet's Future of Everything Festival. Hassabis added that AI research could accelerate even beyond its current rapid pace.

Hassabis also acknowledged that the developments of the last few years have been incredible, adding that he saw no reason for them to slow down. "I don't see any reason why that progress is going to slow down. I think it may even accelerate. So I think we could be just a few years, maybe within a decade away," he was quoted as saying by the news outlet.

As of now, artificial general intelligence is a topic of contention in the AI community. However, Hassabis asserted that we could see very general systems in the next few years.

In April, DeepMind and the Brain team from Google Research united as Google DeepMind. The collaboration of the two units is aimed at accelerating progress towards a world where AI can help tackle the biggest challenges faced by humanity.

DeepMind, an AI research start-up, was founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman in 2010. Before Google acquired it, Facebook was in talks to buy the start-up. The company developed machine learning systems with the help of deep neural networks and models based on neuroscience.

The latest merger between DeepMind and Google Brain is expected to significantly accelerate Google's progress in AI.

IE Online Media Services Pvt Ltd

First published on: 03-05-2023 at 18:01 IST

More here:
AI as powerful as human mind possible in next 5 years, says Google DeepMind CEO Demis Hassabis - The Indian Express

Deep Sleep May Mitigate Effects Of This Alzheimer’s Risk Factor … – mindbodygreen

For this study, researchers wanted to dig into the connection between sleep, memory, and beta-amyloid deposits, one of the primary drivers of Alzheimer's disease. To do so, they studied a small sample of participants who did not have Alzheimer's, half of whom had high amounts of amyloid deposits.

The participants' brain waves were monitored while they slept in a lab using an electroencephalography (EEG) machine. Upon waking, they completed tasks that tested their memory.

Sure enough, among participants with high amounts of amyloid deposits, those who had more deep sleep performed better on the memory tasks than those who didn't sleep as well.

As such, the researchers believe these findings point to deep sleep as a protective factor against memory decline in those with amyloid deposits, even in those who have not been diagnosed with Alzheimer's.

As senior author of the study Matthew Walker, Ph.D., explained in a news release, "Think of deep sleep almost like a life raft that keeps memory afloat rather than memory getting dragged down by the weight of Alzheimer's disease pathology." He adds that it now seems that deep NREM sleep "may be a new, missing piece in the explanatory puzzle of cognitive reserve."

Read more from the original source:
Deep Sleep May Mitigate Effects Of This Alzheimer's Risk Factor ... - mindbodygreen

Vint Cerf on the exhilarating mix of thrill and hazard at the frontiers of tech – TechCrunch

Vint Cerf has been a near-constant influence on the internet since the days when he was helping create it in the first place. Today he wears many hats, among them VP and chief internet evangelist at Google. He is to be awarded the IEEE's Medal of Honor at a gala in Atlanta, and ahead of the occasion he spoke with TechCrunch in a wide-ranging interview touching on his work, AI, accessibility and interplanetary internet.

TechCrunch: To start out with, can you tell us how Google has changed in your time there?

Cerf: Well, when I joined the company in 2005, there were 5,000 people already, which is pretty damn big. And of course, my normal attire is three-piece suits. The important thing is that I thought I would be raising the sartorial quotient of the company by joining. And now, almost 18 years later, there are 170-some-odd thousand people, and I have failed miserably. So I hope you don't mind if I take my jacket off.

Go right ahead.

So as you might have noticed, Sergey has come back to do a little bit more on the artificial intelligence side of things, which is something he's always been interested in; I would say historically, we've always had an interest in artificial intelligence. But that has escalated significantly over the past decade or so. The acquisition of DeepMind was a brilliant choice. And you can see some of the outcomes: first the spectacular stuff, like playing Go and winning, and then the more productive stuff, like figuring out how 200 million proteins are folded up.

Then there are the large language models and the chatbots. And I think we're still in a very peculiar period of time, where we're trying to characterize what these things can and can't do, and how they go off the rails, and how do you take advantage of them to do useful work? How do we get them to distinguish fact from fiction? All of that is in my view open territory, but then that's always an exciting place to be, a place where nobody's ever been before. The thrill of discovery and the risk of hazard create a fairly exciting mix, an exhilarating mix.

You gave a talk recently about, I don't want to say the dangers of the large language models, but...

Well, I did say there are hazards there. I was talking to a bunch of investment bankers, or VCs, and I said, you know, don't try to sell stuff to your investors just because it's flashy and shiny. Be cautious about going too fast and trying to apply it without figuring out how to put guardrails in place.

I raised a question of hazard and wanted people to be more thoughtful about which applications made sense. I even suggested an analogy: you know how the Society of Automotive Engineers has different risk levels for self-driving cars; a risk-level idea like that could apply to artificial intelligence and machine learning.

For entertainment purposes, perhaps it's not too concerning, unless it goes down some dark path, in which case you might want to put some friction into the system to deal with that, especially for a younger user. But then, as you get to the point where you're training these things to do medical diagnosis or give investment advice, or make decisions about whether somebody gets out of jail, now suddenly the risk factors are extremely high.

We shouldn't be unaware of those risk factors. We can, as we build applications, be prepared to detect excursions away from safe territory, so that we don't accidentally inflict some harm by the use of these kinds of technologies.

So we need some kind of guardrails.

Again, I'm not an expert in this space, but I am beginning to wonder whether we need something kind of like that in order to provide a super-ego for the natural language network. So when it starts to go off the rails somewhere, we can observe that that's happening. And a second network that's observing both the input and the output might intervene, somehow, and stop the production of the output.
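Purely to make that idea concrete, here is a minimal sketch of the two-network arrangement Cerf describes. Everything in it is hypothetical: generate() stands in for the primary language model and flags_dark_path() for the observing network; a real supervisor would be a trained model, not a blocklist.

```python
# Minimal sketch of the "observer network" idea: a second model watches
# both the input and the draft output and can stop the response before
# it reaches the user. The names generate() and flags_dark_path() are
# hypothetical stand-ins, not any real API.

def generate(prompt: str) -> str:
    # Stand-in for the primary language model.
    return "draft response to: " + prompt

def flags_dark_path(prompt: str, draft: str) -> bool:
    # Stand-in for the supervising network: screens input and output.
    blocklist = {"self-harm", "violence"}
    text = (prompt + " " + draft).lower()
    return any(term in text for term in blocklist)

def respond(prompt: str) -> str:
    draft = generate(prompt)
    if flags_dark_path(prompt, draft):
        return "I'd rather not continue down this path."
    return draft

print(respond("tell me a story"))
```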

Sort of a conscience function?

Well, it's not quite conscience, it's closer to executive function, the prefrontal cortex. I want to be careful, I'm only reasoning by metaphor here.

I know that Microsoft has embarked on something like this. Their version of GPT-4 has an intermediary model like that, they call it Prometheus.

Purely as an observation, I had the impression that the Prometheus natural language model would detect and intervene if it thought that the interactions were going down a dark path. I thought that they would implement it in such a way that before you actually say something to the interlocutor that is going down the dark path, you intervene and prevent it from going there at all.

My impression, though, is that it actually produces the output and then discovers that it's produced it, and then it says, "Oh, I shouldn't have done that. Oh, dear, I take that back," or "I don't want to talk to you anymore about that." It's a little bit like the email that you get occasionally from the Microsoft Outlook system that says, "This person would like to withdraw the message."

I love when that happens; it makes me want to read the original message so badly, even if I wouldn't have before.

Yeah, exactly. It's sort of like putting a big red flag in there saying, boy, there's something juicy in here.

You mentioned the AI models, that it's an interesting place to work. Do you get the same sort of foundational flavor that you got from working on protocols and other big shared things over the years?

Well, what we are seeing is emergent properties of these large language models that are not necessarily anticipated. And there have been emergent properties showing up in the protocol world. Flow control in particular is a vast headache in the online packet-switch environment, and people have been tackling these problems inside and outside of Google for years.

One of the examples of emergent properties that I think very few of us thought about is the domain name business. Once domain names had value, suddenly all kinds of emergent properties showed up: people with interests that conflict and have to be resolved. Same for internet address space; it's an even weirder environment, where people actually buy IPv4 addresses for like $50 each.

I confess to you that as I watched the auctions for IPv4 address space, I was thinking how stupid I was. When I was at the Defense Department in charge of all this, I should have allocated a slash-eight, which is 16 million addresses, to myself, and just sat on it, you know, for 50 years, then sold it and retired.
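The arithmetic behind that aside is easy to check; here is a quick sketch (the $50 figure is just the spot price Cerf mentions):

```python
# A "/8" block fixes the first 8 bits of a 32-bit IPv4 address,
# leaving 24 bits for hosts.
addresses = 2 ** (32 - 8)
print(addresses)       # 16777216, Cerf's "16 million addresses"
print(addresses * 50)  # 838860800, roughly $839 million at ~$50 each
```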

Even simple systems have the ability to surprise you, especially when a large number of them are interacting with each other. I've found myself not necessarily recognizing when these emergent properties will come, but I will say that whenever something gets monetized, you should anticipate there will be emergent properties and possibly unexpected behavior, all driven by greed.

Let me ask you about some other stuff you're working on. I'm always happy when I see cutting-edge tech being applied to people who need it, people with disabilities, people who just have not been addressed by the current use cases of tech. Are you still working in the accessibility community?

I am very active in the accessibility space. At Google, we have a number of what we call employee resource groups, or ERGs. I'm the executive sponsor for one for Googlers who have hearing problems. And there is a disabilities-oriented group, which involves employees who either have disabilities or family members that have disabilities, and they share their stories with each other, because often people have similar problems but don't know what the solutions were for other people. Also, it's just nice to know that you're not alone in some of these challenges. There's another group called the Grayglers for people that have a little gray in their hair, and I'm the executive sponsor for that. And of course, the focus of attention there is the challenges that arise as you get older, even as you think about retirement and things like that.

When a lot of so-called Web 2.0 stuff came out 10 years ago, it was totally inaccessible, broke all the screen readers, all this kind of stuff. Somebody has to step in and say, look, we need to have this standard, or else you're leaving out millions of people. So I'm always interested to hear about what interesting projects or organizations or people are out there.

What I have come to believe is that just giving engineers a set of specs that say, if you do it this way, it will meet this level of the standard, doesn't necessarily produce intuition. You really have to have some intuition in order to make things accessible.

So I've come to the conclusion that what we really need is to show people examples of something which is not accessible, and something that is, and let them ingest as many examples as we can give them, because their neural networks will eventually figure out: what is it about this design that makes it accessible? And how do I apply that insight into the next design that I do? So, seeing what works and what doesn't work is really important. And you often learn a lot more from what doesn't work than you do from what does.

There's a guy named Gregg Vanderheiden, who's at the University of Maryland; he and I did a two-day event [the Future of Interface Workshop] looking at research on accessibility and trying to frame what this is going to look like over the next 10 or 20 years. It really is quite astonishing what the technology might be able to do to act as an augmenting capability for people that need assistance. There's great excitement, but at the same time great disappointment, because we haven't used it as effectively as I think we could have. It's kind of like how Alexander Graham Bell invented a telephone that can't be used by people who are deaf, which is why he was working on it in the first place.

It is a funny contradiction of priorities. One thing where I do see some of the large language and multimodal AI models helping out is that they can describe what they are seeing, even if you can't see it. I know that one of GPT-4's first applications was in an app for blind people to view the world around them.

We're experiencing something close to that right this minute. Since I wear hearing aids, I'm making use of the captioning capability. And at the moment, since this is Zoom rather than a Google Meet, there isn't any setting on this one for closed captioning. I'm exercising the Zoom application through the Chrome browser, and Google has developed a capability for the Chrome browser to detect speech in the incoming sound.

So packets are coming in and they're known to be sound; it passes through an identification system that produces a caption bar, which you can move around on the screen. And that's been super helpful for me. For cases like this, where the application doesn't have captioning, or for random streaming video that might be coming in and hasn't been captioned, the caption window automatically pops up. In theory, I think we can do this in 100 different languages, although I don't know that we've activated it for more than four or five. As you say, these tools will become more and more normal, and as time goes on, people will expect the system to adapt to their needs.

So language translation and speech recognition are quite powerful, but I do want to mention something that I found vaguely unsettling. Recently, I encountered an example of a conversation between a reporter and a chatbot. He chose deliberately to take the output of the chatbot and have it spoken by the system, and he chose the style of a famous British explorer [David Attenborough].

The text itself was quite well formed, but coming with Attenborough's accent just added to the weight of the assertions, even when they were wrong. The confidence levels, as I'm sure you've seen, are very high, even when the thing doesn't know what it's talking about.

The reason I bring this up is that we are allowing these indicators of, how should we say this, of quality, to fool us. Because in the past, they really did mean it was David Attenborough. But here it's not, it's just his voice. I got to thinking about this, and I realized there was an ancient example of exactly this problem that showed up 50 years ago at Xerox PARC.

They had a laser printer, and they had the Alto workstation and the Bravo text editor, which meant the first draft of anything you typed could be printed out beautifully formatted, with lovely fonts and everything else. Normally, you would never see that production quality until after everything had been edited, you know, wrestled with by everybody to get the text formatted, picture-perfect stuff. That meant the first-draft stuff came out looking like it was final draft. People didn't understand that they were seeing first-round stuff, and that it wasn't complete, or necessarily even satisfactory.

So it occurred to me that we've reached a point now where technology is fooling us into giving it more weight than it deserves, because of certain indicia that used to be indicative of the investment made in producing it. And I'm not quite sure what to do about that.

I don't think anyone is!

I think somehow or another, we need to make it clear what the provenance is of the thing that we're looking at. Like how we needed to say, this is first-draft material, you know, don't make any assumptions. So provenance turns out to be a very important concept, especially in a world where we have the ability to imbue content with attributes that we would normally interpret in one way. Like, it's David Attenborough speaking, and we should listen to that. And yet we have to think more critically about such things, because in fact the attribute is being delivered artificially.

And perhaps maliciously.

Certainly that too. And this is why critical thinking has become an important skill. But it doesn't work very well unless you have enough information to understand the provenance of the material that you're looking at. I think we are going to have to invest more in provenance and identity in order to evaluate the quality of that which we are experiencing.

I wanted to ask you about interplanetary internet, because that whole area is extremely interesting to me.

Well, this one, of course, got started way back in 1998. But I'm a science fiction reader from way back, back to age 10 or something, so I got quite excited when it was possible to even think about the possibility of designing and building a communication system that would span the solar system.

The team got started very small, and now, 25 years later, it involves many of the space agencies around the world: JAXA, the Korean Space Agency, NASA and so on. And a growing team of people who are either government-funded to do space-based research, or volunteers. There's a special interest group called the Interplanetary Networking Special Interest Group, which is part of the Internet Society; that thing got started in 1998. But it has now grown to like 900 people around the world who are interested in this stuff.

We've standardized this stuff, we're on version seven of it, and we're running it up in the International Space Station. It's intended to be available for the return to the moon and the Artemis missions. I'm not going to see the end result of all this, but I'm going to see the first couple of chapters. And I'm very excited about that, because it's not crazy to actually think about. Like all my other projects, it takes a long time. Patience and persistence!

For something like this it must have been a real challenge, but also a very familiar one. In some ways building something like this is what you've been doing your whole career. This is just a different set of constraints and capabilities.

You put your finger on it, exactly right. This is in a different parametric space than the one that works for TCP/IP. And we're still bumping into some really interesting problems, especially where you have TCP/IP networks running on the moon, for example, locally and interconnecting with other internets on other planets, going through the interplanetary protocol. What does that look like? You know, which IP addresses should be used? We have to figure out, well, how the hell does the Domain Name System work in the context of internets that aren't on the planet? And it's really fun!

See the article here:
Vint Cerf on the exhilarating mix of thrill and hazard at the frontiers of tech - TechCrunch

Calm your mind with box breathing: A deep relaxation technique with great benefits – Sportskeeda

Modified May 04, 2023 11:05 GMT

Feeling stressed lately? Give box breathing a try, and see how this powerful breathing technique helps clear the mind and relax the body.

Box breathing, also called square breathing or four-square breathing, is a deep breathing method that helps slow down your breathing and returns it to its normal rhythm.

It's a simple yet very effective relaxation technique that may help improve focus, ease stress, and relax your overall mind and body after a stressful experience.

The best part about this relaxation technique is that it's quick to learn, easy to perform, and can be beneficial in coping with stressful situations. People may find the box breathing exercise particularly useful if they have anxiety, depression or a lung disease like COPD.

Four-square breathing is quite an easy exercise that you can do in any quiet place. It typically involves four basic steps, with each step lasting for four seconds.

The steps include:

Step 1: Breathing in

Step 2: Holding your breath

Step 3: Breathing out

Step 4: Holding your breath

Start seated on a chair with your back straight, and place your feet on the floor. You may also sit on the floor with your back supported against a wall. Close your eyes.

Step 1: Breathe in through your nose, and count to four. As you do that, feel the air entering your lungs.

Step 2: Hold your breath while counting to four in your head.

Step 3: Breathe out slowly while counting to four.

Step 4: Again, hold your breath after breathing out, and count to four in your head.

You should repeat these steps for at least four rounds, or until you feel relaxed and calm. However, if you find it difficult, try them for a count of two and gradually move up to four. Once you've mastered the technique, go up to five or six counts.
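For readers who like to practice along, here is a small command-line pacer that follows the counts described above. It is purely an illustrative sketch, not part of the original article; the phase length and number of rounds are parameters, so you can start at a count of two and work up, as suggested.

```python
# A tiny command-line pacer for the four-phase cycle described above.
import time

PHASES = ["Breathe in", "Hold", "Breathe out", "Hold"]

def box_breathing(count: int = 4, rounds: int = 4) -> None:
    for round_number in range(1, rounds + 1):
        print(f"Round {round_number}")
        for phase in PHASES:
            print(f"  {phase} for {count} seconds...")
            time.sleep(count)  # pace each phase in real time

if __name__ == "__main__":
    box_breathing(count=4, rounds=4)
```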

Box breathing offers several benefits and can be used as a technique to calm the mind and relax your body.

According to experts, there's evidence that shows that deep breathing techniques like square breathing can regulate and calm the autonomic nervous system. Read on to learn about the benefits of square breathing:

Square breathing, when performed regularly and correctly, can ease symptoms of physical stress.

Studies suggest that deep breathing exercises can significantly reduce the production of cortisol (the major hormone associated with stress), which helps lower stress and relax the body and mind.

Square breathing is also said to be really effective on mental and emotional well-being. Studies show that breathing techniques can be highly helpful in reducing depression, anxiety and stress.

Another great benefit of box breathing is its ability to improve overall focus and provide mental clarity. In fact, studies have shown that deep breathing techniques can provide a positive outlook towards stress and stressful situations.

Studies also suggest that square breathing and other breathing techniques, like meditation, can significantly change how the body reacts to stress. Moreover, it can also change your future reaction to stress and help you cope with it in a more positive way.

The box breathing exercise offers further benefits as well.

Overall, square breathing can be a simple and effective way to work on your mental and emotional well-being. With only four easy steps, it can help you refocus on a stressful and hectic day.

When done regularly, this breathing technique can add more relaxation and calmness to your physical and mental health.

Read the rest here:
Calm your mind with box breathing: A deep relaxation technique with great benefits - Sportskeeda

Microsoft: Brace For Pain As FOMO Buyers Rushed In (NASDAQ … – Seeking Alpha

The Microsoft Corporation (NASDAQ:MSFT) AI hype train has moved into full throttle as momentum buyers piled further into MSFT following its recent earnings release.

As such, MSFT is on track to re-test its April 2022 highs, as CEO Satya Nadella further intensified the battle against his hyperscaler rivals, Google (GOOGL) (GOOG) and Amazon Web Services, or AWS (AMZN).

Accordingly, Nadella is confident that "Azure has the most powerful AI infrastructure" in the market. As such, the company has seen significant growth in adoption by companies of its AI services over the past quarter.

Management highlighted that Azure OpenAI service has gained "more than 2,500 customers, up 10x quarter-over-quarter." As such, Microsoft's aggressive approach to winning the cloud computing battle through its generative AI infrastructure is gaining significant momentum.

Wall Street analysts are generally optimistic about Microsoft's approach. Wedbush sees Microsoft as "leading the tech AI arms race." Microsoft has infused its AI advantage into its leading products, encouraging adoption and share gains.

Microsoft's approach has been so successful that even Meta Platforms (META) CEO Mark Zuckerberg declared recently that he "expects generative AI to have an impact on every one of Meta's apps and services."

Hence, we aren't surprised that momentum buyers have rushed into MSFT, seeing the potential for a decisive breakout that could bolster its recovery further.

Notably, Microsoft's successful partnership with OpenAI has forced Google to combine DeepMind and Google Brain into a single unit, codenamed: Google DeepMind.

DeepMind CEO Demis Hassabis is reportedly close to Google co-founder Larry Page, as DeepMind has regularly fought for independence and against interference from Google management.

Therefore, Google's AI reorganization has demonstrated that Google has likely lost the initiative to Microsoft due to its "intense rivalry" in furthering its AI ambitions. As Stratechery's Ben Thompson aptly articulated:

Google's AI dynamics are challenging because the company is mainly trying to protect its position. Microsoft is focused on future gains, while Google is focused on potential losses. - Stratechery

The Redmond-based Microsoft is also moving toward monetizing its AI services, as analysts parsed the revenue uplift from its leadership against its arch-rivals.

Management accentuated that the company is looking to "increasingly monetize" its copilot, embedded across its tech stack. As such, Microsoft focuses on adding "a lot of value and productivity improvement," proving the use cases for its customers as they optimize their spending.

Hence, Microsoft is astutely using its AI tools to help its customers consolidate their cloud spending and vendors. Moreover, Microsoft has an ace up its sleeve, as a former leading AWS executive was reportedly no longer bound by his non-compete agreement with Amazon.

Insider reported recently that Charlie Bell (rumored to have lost the AWS CEO role to Adam Selipsky) is ready to be "unleashed" by Microsoft against Amazon.

Bell brought on his Amazonian culture to help improve Microsoft's cybersecurity product development cadence. Accordingly, Insider reported Bell was instrumental in developing Security Copilot, "which took only a matter of months to go from idea to launch, a much faster timetable than is typical at Microsoft."

Little wonder Nadella seems satisfied with the advantage that Microsoft is gaining in its security segment, helping the company to gain momentum amid the current vendor consolidation.

As such, Nadella highlighted that Microsoft "once again took share across all major categories," demonstrating the company's solid execution.

MSFT quant factor ratings (Seeking Alpha)

As a result of the post-earnings surge, MSFT's valuation is looking increasingly unattractive.

Seeking Alpha Quant reflected an F grade for MSFT, the worst possible. Hence, investors who didn't buy MSFT at its peak pessimism levels previously shouldn't jump on board now.

We also assessed that MSFT's price action looks highly unconstructive now, as momentum buyers are looking greedy, going into FOMO mode.

With that in mind, dip buyers are encouraged to continue staying on the sidelines and avoid chasing the upward surge.

Rating: Hold (Reiterated).

Important note: Investors are reminded to do their own due diligence and not rely on the information provided as financial advice. The rating is also not intended to time a specific entry/exit at the point of writing unless otherwise specified.

We Want To Hear From You

Have additional commentary to improve our thesis? Spotted a critical gap in our thesis? Saw something important that we didn't? Agree or disagree? Comment below and let us know why, and help everyone in the community to learn better!

Here is the original post:
Microsoft: Brace For Pain As FOMO Buyers Rushed In (NASDAQ ... - Seeking Alpha

Microsoft is shoving Edge down people's throats, like it's the bad old days again – Fortune

Microsoft has sparked alarm with an upcoming Outlook for Windows feature relating to links in emails: click on them, and they will open up a new tab in Microsoft's Edge browser, even if your default browser is Google's Chrome or Mozilla's Firefox. The same feature will be coming to Microsoft Teams.

As system administrators have been repeatedly noting in a fiery Reddit thread (reported on by The Verge), this disrespecting of users' browser choices is a real blast from the past.

Back in 2007, the year Apple released the first iPhone, though it was still very much a PC-first world, the Norwegian browser firm Opera complained to the EU antitrust authority about Microsoft bundling its Internet Explorer browser with every copy of Windows. Microsoft had recently had an expensive run-in with the regulator over its bundling of Windows Media Player with Windows, so, in 2009, it settled the browser complaint by agreeing to give people a choice of browsers that they could set up as their default when firing up a new copy of Windows.

The Microsoft that kept getting nailed for anticompetitive practices was meant to be gone now, a relic of the combative Bill Gates and Steve Ballmer days. Satya Nadella's Microsoft was supposed to be a cooler, calmer entity. But the company is increasingly giving off retro vibes.

The most obvious indicator of Microsoft's new phase is its aggressive, arguably impetuous behavior in the A.I. space, where it's become locked in a desperate race against Google that the latter company doesn't seem to want to be part of, just yet. But there are also signs of Redmond returning to its old, anticompetitive ways.

British regulators are sniffing around both Microsoft and Amazon, worried that their terms for cloud customers are designed to discourage switching to other providers. The U.K.'s antitrust authority just nixed Microsoft's $69 billion takeover of Activision Blizzard because it thought Microsoft would use the deal to dominate the nascent cloud-gaming market. The European Commission is reportedly considering a fresh antitrust probe over Microsoft's bundling of Teams with its Office suite (Slack complained) and is also looking into a separate complaint (from Amazon and others) about Microsoft's terms for running its software on non-Microsoft clouds.

On its own, Microsoft's move to shove Edge down the throats of Outlook and Teams users is annoying, but perhaps defensible if you consider that it enables a new side-by-side experience that lets users see the emails or chats that contained the opened link in an Edge sidebar. However, in the overall context, it's another small blow to the notion that Microsoft is a different, friendlier beast these days.

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

David Meyer

Data Sheet's daily news section was written and curated by Andrea Guzman.

Senior TikTok official to leave company. Just months after a promotion, Eric Han, the leader of TikTok's U.S. trust and safety operations for years, will depart this month. The Verge reports that in December, TikTok combined its global trust and safety team with its U.S.-based trust and safety group, which Han had led from Los Angeles. Han had wanted a role leading that combined team, former employees say. Instead, he was given the role of head of trust and safety for the new U.S. data security team. The team was made to address national security concerns, but Han reportedly felt it left him vulnerable to being a scapegoat.

Amazon Web Services installs hiring controls. All potential hires for Amazon's cloud business must now be filtered through Roster, a headcount management system that ensures they are matched with a specific open position. And in a separate internal recruiting tool, Amazon is enabling more features aimed at preventing offers unless there's a match in Roster, a memo viewed by Fortune says. The move comes as Amazon faces its lowest level of sales growth in years and CEO Andy Jassy looks to A.I. for a boost of the cloud division.

DeepMind CEO shares optimism on AGI. Computers with human-level cognitive abilities could be achievable within a few years, says Google DeepMind CEO Demis Hassabis. Some top A.I. executives, such as OpenAI CEO Sam Altman, have shared their aims to develop the technology, while others don't see a possible route to achieving it. But during an event held by the Wall Street Journal yesterday, Hassabis said it's possible it could happen within the next decade, with cautious development using the scientific method, "where you try and do very careful controlled experiments to understand what the underlying system does."

Using Zillow and any assumptions you need to make, find me the best place where I can renovate a house to add a Giant Mech Fighting Battle Dome. These will be really giant mechs fighting with giant rockets.

Wharton School of the University of Pennsylvania professor Ethan Mollick, who was one of the select users to try out ChatGPT's new plug-ins, which include the real estate website Zillow. ChatGPT gave Mollick three sites in rural Nevada.

Chegg's shares tumbled nearly 50% after the edtech company said its customers are using ChatGPT instead of paying for its study tools, by Prarthana Prakash

DeepMind cofounder's new A.I. chatbot is a good listener. And that's about it. Is that enough? by Jeremy Kahn

Apple cofounder Steve Wozniak isn't scared of A.I., but he believes it'll be used by horrible people to do evil things, by Eleanor Pringle

Charles Schwab's daughter thinks Americans need to know more about money, and is cautious of crypto, by Anna Tutova

The top 3 lessons Bill Gates taught this former Microsoft VP, by Eleanor Pringle

Snap CEO Evan Spiegel wiped out over $10 million in student loans for an entire graduating class. A year later, 3 grads share how it changed their lives, by Jane Thier

Google rolls out passkeys. Everyone with Google accounts can now use passkeys, which allow users to sign in to websites and apps using biometrics or the codes used to unlock devices. It comes a year after Google, Apple, Microsoft, and the FIDO Alliance announced they're teaming up so that users can log in without passwords.

It's been an aim for Google since long before then. In a release announcing the news, Google said the launch is a big step in a cross-industry effort "that we helped start more than 10 years ago, and we are committed to passkeys as the future of secure sign-in, for everyone." While passwords remain an option, passkeys work so that devices only share a signature with Google websites and apps, and not with phishing intermediaries. That way, users don't have to be as watchful with where they're used as people are with passwords or SMS verification codes. The data shared with Google is the public key and the signature, which don't contain biometric information.
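To make the mechanics concrete, here is a minimal sketch of the public-key idea underneath passkeys, written with the third-party Python `cryptography` package. It illustrates the concept only and is not Google's or FIDO's actual protocol: the private key never leaves the device, the server stores only the public key, and signing a fresh challenge proves possession.

```python
# Conceptual sketch of the passkey handshake: no shared secret is ever
# transmitted, so there is nothing reusable for a phisher to capture.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the device creates a key pair and sends the public half.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Sign-in: the server issues a random challenge...
challenge = os.urandom(32)

# ...the device signs it (after a local biometric or PIN unlock)...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies with the stored public key; this call
# raises InvalidSignature if the proof does not match.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("signature verified; only the public key and signature were shared")
```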

Go here to read the rest:
Microsoft is shoving Edge down peoples throats, like its the bad old days again - Fortune

Team Led by Columbia University Wins $20M NSF Grant to Develop AI Institute for Artificial and Natural Intelligence – Newswise

Newswise, New York, NY, May 5, 2023: The National Science Foundation (NSF) announced today that it is awarding $20 million to establish the AI Institute for ARtificial and Natural Intelligence (ARNI), an interdisciplinary center led by Columbia University that will draw together top researchers across the country to focus on a national priority: connecting the major progress made in artificial intelligence (AI) systems to the revolution in our understanding of the brain.

Collaborative partnerships

ARNI is a collaboration between Columbia, Baylor College of Medicine, City University of New York, Harvard, Princeton, Howard Hughes Medical Institute, Mila Quebec AI Institute, Tuskegee University, the University of Pennsylvania, and UTHealth Houston. Industry partners include Amazon, DeepMind, Google, IBM, and Meta, and outreach partners include the Neuromatch Academy and the New York Hall of Science. In addition to receiving NSF funding, ARNI is funded by a partnership between NSF and the Office of the Under Secretary of Defense for Research and Engineering (OUSD R&E).

"The National Science Foundation has long been a strong supporter of research at Columbia University, and we are very excited about this new collaboration," said Mary Boyce, Provost, Columbia University. "The AI Institute for Artificial and Natural Intelligence draws not only on our interdisciplinary strengths throughout the University but also our partnerships -- both old and new -- across the country. By bringing together the amazing progress being made in AI systems and our growing understanding of the brain, ARNI will ignite advances in both neuroscience and AI, and transform our world in the next decade."

Revolution in neuroscience, cognitive science, and AI research

The past 10 years have seen spectacular progress in interrogating neural activity, circuitry, and learning, yet our neuroscience insights have so far informed AI only superficially. Conversely, our rapidly advancing AI methods and systems based on massive amounts of data have only begun to impact neuroscience. ARNI will meet the urgent need for new paradigms of interdisciplinary research between neuroscience, cognitive science, and AI. This will accelerate progress in all three fields and broaden the transformative impact on society in the next decade.

"ARNI is an ambitious plan that requires a dedicated effort across institutions, and we have assembled one of the strongest groups of investigators in theoretical neuroscience and foundational machine learning in the world," said Jeannette Wing, Executive Vice President for Research, Columbia University. "Our PIs are building on existing, and often tightly interacting, neural and AI groups at Columbia, Baylor, Penn, together with Janelia, MILA, Google/DeepMind, and Meta. At the same time, we are building new bridges to Tuskegee, CUNY, Yale, IBM, and beyond. Our track record is already strong and now, thanks to the National Science Foundation, we expect ARNI to meet the urgent need for new paradigms of interdisciplinary research between neuroscience and AI."

Research team

ARNI will be led by Principal Investigators (PIs) Richard Zemel, Kathleen McKeown, and Christos Papadimitriou (Computer Science, Columbia Engineering), Liam Paninski (Zuckerman Institute and Statistics and Neuroscience Departments, Columbia University), and Xaq Pitkow (Baylor College of Medicine, Rice University). These PIs bring together expertise from a wide variety of disciplines, including artificial intelligence, theoretical computer science, statistics, neuroscience, physics, and cognitive science. They will work with a large team of researchers to tackle the limitations and challenges of current machine learning systems, including learning with limited data, reasoning about causality and uncertainty, and lifelong learning -- all hallmarks of biological systems -- while also pushing the boundaries of our understanding of how brains compute and learn.

Bridging the gaps between artificial and biological networks

ARNI will bridge the current significant gaps between artificial and biological networks and make room for a broad, diverse range of applications, from the industrial sector, such as robust, interpretable medical decisions and smarter home assistants; to societal applications, such as better social safety nets and assistive multimodal systems to help the vulnerable; to scientific discoveries such as providing hypotheses about brain function and creating powerful tools for extracting insights from massive data.

"Thanks to new AI algorithms, our knowledge of neuroscience and cognitive science expands every day," said Shih-Fu Chang, Dean of Columbia Engineering. "And with our growing knowledge of the brain and cognitive science, we have better AI algorithms, making progress on important applications that impact our world. ARNI aims to overcome current limitations in AI while also introducing modern AI into neuroscience, foundational machine learning, and cognitive science. Engineers are pivotal for applying scientific insights to real-world problems, and we look forward to the groundbreaking discoveries that will come from this exciting large-scale collaboration. We are grateful to the National Science Foundation for helping us create this modern cross-disciplinary arsenal, converging to generate new insights and advance this very important, emerging field."

Trustworthy systems

Richard Zemel, the Director of ARNI and the Trianthe Dakolias Professor of Engineering and Applied Science at Columbia Engineering, has been integral in the development of AI technology, most recently as the co-founder and Research Director of the Vector Institute for Artificial Intelligence. His research spans machine learning and its interaction with neuroscience and cognitive science, as well as robust and fair machine learning. He noted that robust and fair machine learning is critical for using these new AI tools to improve society.

"A key characteristic of our approach is a focus on developing interpretable models, often based on causal approaches, that are cognitively grounded, given our research on the brain," Zemel said. "This will lead to the development of trustworthy systems that can explain their reasoning to end users in terms they understand. This is critical in high-stakes applications such as healthcare, law, and in support of vulnerable populations."

Education and outreach

The institute will provide educational and research opportunities for undergraduate and graduate students, as well as postdoctoral trainees, within and at the interface of AI, neuroscience, and cognitive science. Outreach partners, including the Neuromatch Academy and the New York Hall of Science, will help inform the public of these new developments and teach critical skills to the next generation of students.

Columbia Engineering

Since 1864, the Fu Foundation School of Engineering and Applied Science at Columbia University has been a resource to the world for major advances in human progress. Today, Columbia Engineering is a leading engineering school and a nexus for high-impact research. Embedded in New York City, the School convenes more than 250 faculty members and more than 6,000 undergraduate and graduate students from around the globe to push the frontiers of knowledge and solve humanity's most pressing problems.

Zuckerman Institute

In collaboration with Columbia's Zuckerman Institute, the ARNI team includes leading senior investigators and visionaries in the field of theoretical and cognitive neuroscience. The Zuckerman Institute brings together diverse researchers whose expertise spans a wide range of interdisciplinary neuroscience research areas, providing an unsurpassed intellectual environment, multi-level support, and opportunities for interaction.

NSF

The U.S. National Science Foundation propels the nation forward by advancing fundamental research in all fields of science and engineering. NSF supports research and people by providing facilities, instruments, and funding to support their ingenuity and sustain the U.S. as a global leader in research and innovation. With a fiscal year 2022 budget of $8.8 billion, NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities, and institutions. Each year, NSF receives more than 40,000 competitive proposals and makes about 11,000 new awards. Those awards include support for cooperative research with industry, Arctic and Antarctic research and operations, and U.S. participation in international scientific efforts. www.nsf.gov

See the rest here:
Team Led by Columbia University Wins $20M NSF Grant to Develop AI Institute for Artificial and Natural Intelligence - Newswise

AI Is Helping Us Read Minds, But Should We? – HT Tech

Since mind reading has only existed in the realms of fantasy and fiction, it seems fair to apply the phrase to a system that uses brain scan data to decipher stories that a person has read, heard, or even just imagined. It's the latest in a series of spooky linguistic feats fueled by artificial intelligence, and it's left people wondering what kinds of nefarious uses humanity will find for such advances.

Even the lead researcher on the project, computational neuroscientist Alexander Huth, called his team's sudden success with using noninvasive functional magnetic resonance imaging to decode thoughts "kind of terrifying" in the pages of Science.

But what's also terrifying is the fact that any of us could come to suffer the horrific condition the technology was developed to address: paralysis so profound that it robs people of the ability even to speak. That can happen gradually through neurological diseases such as ALS or suddenly, as with a stroke that rips away all ability to communicate in an instant. Take, for example, the woman who described an ordeal of being fully aware for years while treated as a vegetable. Or the man who recounted being frozen, terrified and helpless as a doctor asked his wife if they should withdraw life support and let him die.

Magazine editor Jean-Dominique Bauby, who suffered a permanent version of the condition, used a system of eye blinks to write the book The Diving Bell and the Butterfly. What more could he have done given a mind decoder?

Each mind is unique, so the system developed by Huth and his team only works after being trained for hours on a single person. You can't aim it at someone new and learn anything, at least for now, Huth and collaborator Jerry Tang explained last week in a press event leading up to the publication of their work in Monday's Nature Neuroscience.

And yet their advance opens prospects that are both scary and enticing: A better understanding of the workings of our brains, a new window into mental illness, and maybe a way for us to know our own minds. Balanced against that is the concern that one day such technology may not require an individuals consent, allowing it to invade the last refuge of human privacy.

Huth, who is an assistant professor at the University of Texas, was one of the first test subjects. He and two volunteers had to remain motionless for a total of 16 hours each in a functional MRI, which tracks brain activity through the flow of oxygenated blood, listening to stories from The Moth Radio Hour and the Modern Love podcast, chosen because they tend to be enjoyable and engaging.

This trained the system, which produced a model for predicting patterns of brain activity associated with different sequences of words. Then there was a trial-and-error period, during which the model was used to reconstruct new stories from the subjects' brain scans, harnessing the power of a version of ChatGPT to predict which word would likely follow from another.

Eventually the system was able to read brain scan data to decipher the gist of what the volunteers had been hearing. When the subjects heard, "I don't have my driver's license yet," the system came up with, "she has not even started to learn to drive." For some reason, Huth explained, it's bad with pronouns, unable to figure out who did what to whom.
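To make the method easier to follow, here is a toy sketch of the trial-and-error loop the article describes. Every function in it is a hypothetical stand-in, not the researchers' code: a language model proposes continuations, a per-subject encoding model predicts the brain activity each candidate would evoke, and the candidate that best matches the observed scan is kept.

```python
import random

def propose_continuations(text):
    # Stand-in for the language model: extend the text with candidate words.
    words = ["she", "started", "to", "learn", "to", "drive", "home"]
    return [text + " " + w for w in random.sample(words, 3)]

def predict_brain_activity(candidate):
    # Stand-in for the per-subject encoding model: map text to a fake
    # activity vector (here, just crude character statistics).
    return [len(candidate) % 7, candidate.count("e")]

def similarity(predicted, observed):
    # Stand-in for the match score (negative squared distance).
    return -sum((p - o) ** 2 for p, o in zip(predicted, observed))

def decode(observed_scans):
    # Greedily keep the candidate whose predicted activity best fits
    # each successive observed scan.
    text = ""
    for scan in observed_scans:
        candidates = propose_continuations(text)
        text = max(candidates,
                   key=lambda c: similarity(predict_brain_activity(c), scan))
    return text

print(decode([[3, 1], [5, 2], [2, 4]]))  # toy "scans", toy output
```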

Weirder still, the subjects were shown videos with no sound, and the system could make inferences about what they were seeing. In one, a character kicked down another, and the system used the brain scan to come up with, "he knocked me to the ground." The pronouns seemed scrambled, but the action was spookily on target.

The people in the scanner might never have been thinking in words at all. "We're definitely getting at something deeper than language," Tang said. "There's a lot more information in brain data than we initially thought."

This isn't a rogue lab doing mad science but part of a long-term effort that's been pursued by scientists around the world. In a 2021 New Yorker article, researchers described projects leading up to this breakthrough. One shared a vision of a Silicon Valley-funded endeavor that could streamline the cumbersome functional MRI scanner into a wearable thinking hat. People would wear the hat, along with sensors, to record their surroundings, decode their inner worlds and mind-meld with others, even perhaps communicate with other species. The recent breakthroughs make this future seem closer. For something that's never existed, mind reading seems to crop up regularly in popular culture, often reflecting a desire for lost or never-realized connection, as Gordon Lightfoot sang of in "If You Could Read My Mind." We envy the Vulcans their capacity for mind melding.

Historical precedent, however, warns that people can do harm simply by taking advantage of the belief that they have a mind-reading technology, just as authorities have manipulated juries, crime suspects, job candidates and others with the belief that a polygraph is an accurate lie detector. Scientific reviews have shown that the polygraph does not work as people think it does. But then, scientific studies have shown our brains don't work the way we think they do either. So, the important work of giving voice back to people whose voices have been lost to illness or injury must be undertaken with deep thought for ethical considerations, and an awareness of the many ways in which that work can be subverted. Already there's a whole field of neuroethics, and experts have evaluated the use of earlier, less effective versions of this technology. But this breakthrough alone warrants a new focus. Should doctors or family members be allowed to use systems such as Huth's to attempt to ask about a paralyzed patient's desire to live or die? What if it reports back that the person chose death? What if it misunderstood?

Faye Flam is a Bloomberg Opinion columnist covering science. She is host of the Follow the Science podcast.

Continue reading here:
AI Is Helping Us Read Minds, But Should We? - HT Tech

Google intensifies its bid to lead AI race, announces Google DeepMind: Here's what it means – The Indian Express

"AI will be as good or as evil as human nature allows," said the CEO of Google, Sundar Pichai, in a recent interview with 60 Minutes. Pichai said that the revolution was coming faster than one might think. And, keeping up with the fast pace, Google on Thursday announced Google DeepMind. It is Alphabet Inc.'s latest bid to enhance its research and development in artificial intelligence.

DeepMind and the Brain team from Google Research have united as Google DeepMind. The union of the two entities is aimed at accelerating the progress towards a world where AI can help solve the biggest challenges facing humanity.

"Together, in close collaboration with our fantastic colleagues across the Google Product Areas, we have a real opportunity to deliver AI research and products that dramatically improve the lives of billions of people, transform industries, advance science, and serve diverse communities," said DeepMind CEO Demis Hassabis in an official release.

Pichai, for his part, said that progress has been faster than ever before. "To ensure the bold and responsible development of general AI, we're creating a unit that will help us build more capable systems more safely and responsibly. This group, called Google DeepMind, will bring together two leading research groups in the AI field: the Brain team from Google Research, and DeepMind," Pichai said in his statement.

According to the Google boss, the collective accomplishments in AI over the last decade include AlphaGo, Transformers, word2vec, WaveNet, AlphaFold, sequence-to-sequence models, distillation, deep reinforcement learning, and distributed systems and software frameworks like TensorFlow and JAX for expressing, training and deploying large-scale ML models.

DeepMind, an AI research start-up, was founded by AI researchers Demis Hassabis, Shane Legg, and Mustafa Suleyman in 2010. In 2014, it was reported that Facebook was in talks to acquire the company. However, Google bought the company for over $500 million. Around the time of the acquisition, it was seen as a move that would bring Google to the forefront of deep learning, giving it an edge over its competitors.

DeepMind developed machine learning systems that used deep neural networks and models inspired by neuroscience. The company applies general-purpose learning algorithms to large data sets to train its system and predict outcomes. It illustrates the vast potential of machine learning and how it can advance AI. It is believed that the principles applied by DeepMind can also be used by companies to enhance their efficiency in various areas.

Google Brain is essentially an AI research team that works within Google AI. The dedicated division for AI research took shape in 2011 through a collaboration between Google fellow Jeff Dean, Stanford University professor Andrew Ng, and Google researcher Greg Corrado.

Reportedly, Google Brain was conceived to develop deep learning processes on existing infrastructure. The group fuses machine learning research, large-scale computing resources, and information technology. TensorFlow, the open-source software library, is a notable technology from Google Brain that makes neural networks accessible to the public, alongside numerous internal AI projects. Google Brain aims to create research avenues in natural language processing and machine learning.
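As a sense of what that public access looks like, here is a minimal, self-contained TensorFlow example: define, train, and apply a tiny neural network on synthetic data. It is purely illustrative and assumes only the `tensorflow` and `numpy` packages.

```python
import numpy as np
import tensorflow as tf

# Synthetic data: learn y = 3x - 1 from noisy samples.
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1).astype("float32")
y = 3.0 * x - 1.0 + np.random.normal(scale=0.05, size=x.shape).astype("float32")

# A one-neuron network is enough for a linear relationship.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=50, verbose=0)

# Should print roughly [[0.5]], since 3 * 0.5 - 1 = 0.5.
print(model.predict(np.array([[0.5]], dtype="float32")))
```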

According to Pichai, combining the talents from Google Brain and DeepMind into one focused team, backed by computational resources, will significantly accelerate the company's progress in AI. Pichai, in the official release, also noted that Google has been an AI-first company since 2016 and that it has strived to improve its AI products, be it in Search, Gmail, YouTube, or the camera in Pixel phones. His words reflected the search giant's deeper commitment to innovation in AI.

Hassabis feels that Google DeepMind will help in building more capable general AI safely and responsibly. The new entity will bring together world-class talent in AI, computing power, resources, and infrastructure to bolster the next generation of AI developments for products across Google and Alphabet Inc.

"The research advances from the phenomenal Brain and DeepMind teams laid much of the foundations of the current AI industry, from Deep Reinforcement Learning to Transformers, and the work we are going to be doing now as part of this new combined unit will create the next wave of world-changing breakthroughs," said Hassabis.

First published on: 21-04-2023 at 14:09 IST

See the original post here:
Google intensifies its bid to lead AI race, announces Google DeepMind: Heres what it means - The Indian Express

Elon Musk Ramps Up A.I. Efforts, Even as He Warns of Dangers – The New York Times

But as OpenAI began building the technology that would result in ChatGPT, many at the lab realized that openly sharing its software could be dangerous. Using A.I., individuals and organizations can potentially generate and distribute false information more quickly and efficiently than they otherwise could. Many OpenAI employees said the lab should keep some of its ideas and code from the public.

In 2018, Mr. Musk resigned from OpenAI's board, partly because of his growing conflict of interest with the organization, two people familiar with the matter said. By then, he was building his own A.I. project at Tesla: Autopilot, the driver-assistance technology that automatically steers, accelerates and brakes cars on highways. To do so, he poached a key employee from OpenAI.

In a recent interview, Mr. Altman declined to discuss Mr. Musk specifically, but said Mr. Musk's breakup with OpenAI was one of many splits at the company over the years.

"There is disagreement, mistrust, egos," Mr. Altman said. "The closer people are to being pointed in the same direction, the more contentious the disagreements are. You see this in sects and religious orders. There are bitter fights between the closest people."

After ChatGPT debuted in November, Mr. Musk grew increasingly critical of OpenAI. "We don't want this to be sort of a profit-maximizing demon from hell, you know," he said during an interview last week with Tucker Carlson, the former Fox News host.

Mr. Musk renewed his complaints that A.I. was dangerous and accelerated his own efforts to build it. At a Tesla investor event last month, he called for regulators to protect society from A.I., even though his car company has used A.I. systems to push the boundaries of self-driving technologies that have been involved in fatal crashes.

That same day, Mr. Musk suggested in a tweet that Twitter would use its own data to train technology along the lines of ChatGPT. Twitter has hired two researchers from DeepMind, two people familiar with the hiring said. The Information and Insider earlier reported details of the hires and Twitter's A.I. efforts.

Originally posted here:
Elon Musk Ramps Up A.I. Efforts, Even as He Warns of Dangers - The New York Times