
Episode 9: Frontiers of AI with Aaron Erickson of NVIDIA – Foley & Lardner LLP

In our ninth episode, Aaron Erickson from NVIDIA joins Natasha Allen to share his perspectives on where generative artificial intelligence is heading. How are LLMs evolving beyond natural language to incorporate multimodal inputs, e.g., images, sound, video, or even taste and smell? What are some of the new economic models being developed for AI tools? And are we primed to see a trend towards bespoke models designed for specific organizations?


The below episode transcript has been edited for clarity.

Natasha Allen

Thank you for joining us for another episode of the Innovative Technology Insights podcast. My name is Natasha Allen and I am a corporate partner in the San Francisco and Silicon Valley offices at Foley and Lardner. I'm also the co-chair of the AI sub-sector, which allows me to have this cool opportunity to interview exciting individuals working on cutting-edge technologies, including AI. I have with me today Aaron Erickson. Aaron is a key member of the software engineering team at NVIDIA, helping to build their enterprise DGX AI platform. Prior to NVIDIA, Aaron spent 30 years working in leadership roles, most recently as CEO of Org Space and a VP of Engineering at New Relic. Over the course of his entire career, Aaron has been an advocate for building better software from his home in San Francisco. Welcome, Aaron.

Aaron Erickson

Thanks for having me.

Natasha Allen

Alright, so let's get started. With the advent of ChatGPT came a better understanding of the world of AI. Prior to this, laypeople didn't really have an understanding of what AI was or what it could do - in particular what large language models (LLMs) could do, which, as you're aware, are the foundation for deep learning algorithms that help us understand and process natural language. As of now, we understand that LLMs can be used for text, specifically to help generate or prompt certain text. But how do you think LLMs will evolve beyond natural language, incorporating multi-modal inputs such as images, sounds, video, or even taste?

Aaron Erickson

It is one of the most exciting developments, I think. People are seeing this happen even now. With the latest ChatGPT release, if you have the premium version, you can talk to it and it talks back to you in voice. In fact, one of the scariest things that people saw with it initially was CAPTCHAs. If anybody's ever used a CAPTCHA where you have to say which of these things on the screen is a traffic light or not, ChatGPT can pretty much do that even today.

Multimodal is, I think, one of the really interesting places it's going. One good example of this is, you think about Tesla and what they're able to do with video. When you drive a Tesla, it's almost like you have a kind of permanent dashcam on the car. It's always taking pictures and it's a big part of how we can drive independently or autonomously when you put it in that mode. But one of the really interesting things is how they're using all that video that they've accumulated. Google is also doing this with the models they're developing, and a lot of these new models are taking video and able to transcribe it to understand what each frame of the picture means. This is in order to really get at: what could we learn if we could see the world around us?

The best way to think about pre-multimodal LLMs is imagining you trained a human or took them to school, but the only thing they could do in school was read books or papers. They couldn't actually experience the world as it is, be it through seeing the world, or in some of the more speculative applications I'm hearing about, even smell the world or taste the world, or understand different aspects and be able to code it into a training model. At the end of the day, you almost kind of have your inner monologue where you think about language. You look at a light and it translates into the word light and then you're able to associate that to when I have a light on, I can read. That same kind of trick that we do as humans, we are now able to do with these kinds of non-biologic, or synthetic, brains that we've created.

Natasha Allen

Wow, that's amazing. With the advent of this technology, what are some of the key challenges, do you think, to integrating multimodal inputs into LLMs, especially when it comes to senses like taste and smell, and those things that you think are inherently human?

Aaron Erickson

The first obvious sense is sight, thinking of being able to look at pictures, which we've been doing for years, and being able to look at video, which is a bunch of pictures and a lot of processing power, but it's a known quantity. It's not particularly groundbreaking at this point. For other senses, like being able to taste and smell, we have ways we can, through understanding the chemical composition of a substance, infer what it might smell or taste like. If you can digitize something, and these are all things that are digitizable, it's just a matter of some of those other kinds of senses being a little more expensive or more involved. But there's no reason it couldn't be done. I used to say this, almost kiddingly: are we going to have generative smell next? I'm not sure it's something we want as a society, but there's no reason why it couldn't happen. It's just a matter of translating it into some digital form and then allowing the model to learn from it.

And so what is multimodal? There's what the inputs are, what it learns from. There's also what it can output. We think about generative AI and ChatGPT and smart technologies as generating text and pictures, and maybe generating videos someday. There are early models that generate videos where you just type in "I would like to see blah happen" and a video is generated that does that. It's a little rough right now, but the models are getting better and the ones you see today are the worst that you'll ever see. They're only going to get better from here.

One of the things that we do at NVIDIA is we have a product called BioNeMo and this is helping do generative drug discovery. How do you do that? If you think about it, it's another kind of data and you're able to literally use the same kind of technology to generate what ideal protein strands might be, maybe a cure for COVID, or whatever it is that you want to solve. Those kinds of applications are happening today. And research is being accelerated; think about the kinds of medications we'll be able to get or the kinds of things we'll be able to do with this technology, like make power plants more efficient. I know one startup that has demonstrated making a power plant 90% more efficient by using this kind of technology, helping figure out ways to make a plant more efficient that aren't obvious to a human.

Natasha Allen

I think it'd be so amazing to see that evolution. We have the ways that we can expand the use of LLMs; now how do you monetize it? How do you monetize these AI tools? Maybe you have some examples, or can you explain some of the current economic models prevalent in the AI industry?

Aaron Erickson

You wonder why some of these startups are raising what seem like ridiculous seed rounds; I heard of one that was like US$50 million or US$100 million. Largely, they're not just hiring employees - you'd actually be surprised by how few employees a lot of these firms are hiring. Largely they're buying infrastructure, either from us [NVIDIA] or from other providers of GPU technology; a lot of the money is going to go into that. And those are people developing foundational models. That's your OpenAIs and Anthropics of the world, as well as some of the big FAANG companies; they're all doing some version of this too. It's just an arms race right now.

The more interesting thing is, and this is what I think a lot of people miss, that there are a lot of what I call narrow AI solutions that you can build. They might be good at a specific problem. One good example of this, and you don't need a massive array of GPUs to do this, is that you could develop a reasonable model that detects fraud.

Say you want to run your fraud detection department to find people that are fraudulently putting in expense reports or doing some other thing that is anomalous. You can build an LLM that is very narrowly trained to detect fraud and actually run that against your accounting system, or run that against a set of accounting systems, email, or something like that, to be an early warning indicator. That doesn't require the same kind of GPU fleet that you would need for building a foundational model. You could take something like Llama or one of the open source models, enhance it or fine tune it with data about how fraud happens or examples of fraud in your company, and then build some pretty incredible products. This could be done internally in companies, building some pretty incredible capabilities using these things to automate certain kinds of routine intellectual work. That's just one example, but it's hard to go into a company and not find maybe 20 or 30 other examples like that where a narrow AI can do incredibly useful things that are far more valuable than what you'll spend developing it.
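To make that concrete, here is a minimal sketch of the kind of fine-tuning Erickson describes, using the Hugging Face transformers and peft libraries to attach low-rank (LoRA) adapters to a small open model and train it as a fraud/not-fraud classifier. The base model, the toy examples, and the hyperparameters are illustrative assumptions, not anything endorsed in the episode.

```python
# Hypothetical sketch: fine-tuning an open model to flag suspicious expense reports.
# Model name, example data, and hyperparameters are illustrative assumptions only.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base = "distilbert-base-uncased"  # stand-in; a Llama-class model would work similarly
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# LoRA: train small low-rank adapter matrices instead of all base weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="SEQ_CLS"))

# Toy labeled examples: 1 = looks fraudulent, 0 = looks routine.
examples = Dataset.from_dict({
    "text": ["Dinner for 2, $1,400, no receipt", "Taxi to airport, $38, receipt attached"],
    "label": [1, 0],
})
examples = examples.map(
    lambda e: tokenizer(e["text"], truncation=True, padding="max_length", max_length=64)
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="fraud-lora", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=examples,
).train()
```

In practice you would also hold out evaluation data and measure false-positive rates before letting such a model flag anyone.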

Natasha Allen

Interesting. So maybe elaborate on the concept of bespoke LLMs. Is this a new thing that executives could ask for - "I want my own LLM tailored to myself"? What would make a bespoke LLM different from an off-the-shelf AI model?

Aaron Erickson

Why would you develop a bespoke LLM in the first place is a good question. Because right now if you use ChatGPT or whatever, it's going to tell you general facts about the world. It's not going to give you your sales forecast. It doesn't know data about that. It might say, well, you know, this company is doing pretty good, and maybe you even have it attached to Bloomberg and maybe it will tell you a little more about your company. But it can never know as much as you would know inside the company. This includes your private data, your accounting system, your HR system, and every digital footprint in your organization that's specific to you. That's just one of the obvious reasons to build your own LLM: to have a trained model that, as a CEO, can tell you anything you need to know about your company. It can give you advice on your strategy, almost become like a McKinsey consultant in a box. Even if you have other people doing that, having something like this where you can iteratively ask questions and do so whenever you need, whenever the thought comes by, is pretty useful.

Then you think about, how is it different? I think it's kind of like how somebody who's been in your company for 10 years might be slightly more valuable than somebody that's only been there a week. Institutional knowledge, right?

A person that's been around the company for 10 years has a lot of tacit knowledge about everything from how people talk, what the company culture is, what kinds of things are inside or outside the Overton window of what ought to be discussed or not, ideas that have been tried, things that the organization as a whole has learned. It would be a shame to have LLMs and not be able to take advantage of these things that we learn inside our companies over time.

These models get more and more useful the more you can train them on the context of your organization. And this is the part that's incredible, and why we're seeing a lot of demand for these kinds of systems: CEOs don't know the limits. Right now, the more horsepower you put into one of these things, the smarter they get. It starts to become an economic imperative to have the smartest AI model. One smarter than your competitors'. One trained on better data, one trained on more GPUs; you're effectively making it smarter and that can be a competitive advantage over time. Especially as we start to get closer to artificial general intelligence, which even Sam Altman and other people are saying is a lot closer than we think. It might be before 2030 that we get this kind of broader concept of a generally intelligent LLM that's better than most humans. It's pretty incredible to think about. I don't go a day where I don't think of another application where it could be used in one of these contexts.

Natasha Allen

With that in mind, two questions. The first one is, if organizations decide to go down this bespoke LLM route, what are some pieces of advice or considerations you could offer to help them navigate it?

Aaron Erickson

One of the first pieces of advice, and this kind of goes for anybody using one of these tools: it is very easy to treat these like humans, to think they're human and to apply human characteristics to them. I think that's a mistake. These are not humans. These are machines, but these are machines that sometimes are wrong in the same way a human might be wrong. People will complain about hallucination, and I think correctly. And a lot of the research that's been happening is about how do you avoid hallucination? But it's a lot more useful when you think about the fact that if you're an executive and you have a team of people that report to you, it is very frequent that those people that report to you aren't lying to you. Maybe they are occasionally, but if they're wrong about something it's because they were asked to produce an answer but they don't really know 100% what that answer is. Most humans want to feel like they're right. You're trained to not be wishy-washy when you answer a question and so a lot of people will state things somewhat hubristically even when they're not necessarily true. And I'd like to think LLMs are really just kind of following that pattern.

Treat the way an LLM works a little bit less like you would a traditional computer and expect the nondeterminism. Expect it might be wrong. And expect you might need to validate some of the facts. Now this goes for ChatGPT or your own bespoke LLMs. That's the same risk. Imagine you trained a language model on all your HR data, all your accounting data, all your sales data, all the important data sources in your organization. Now, if you are an organization that is fully transparent in every capacity, maybe this isn't going to bother you, but an LLM will answer based on any of the data it has seen.

A lot of the really interesting, and frankly hard, work that's happening now is: how do you make a company-trained LLM not just reveal, say, salary information or other kinds of personally identifiable information that it might learn about? How do you make it not regurgitate that back? Now, the CEO will have free access to it because hypothetically that's the person that should have access to nearly every bit of company data, and there are probably a number of people inside the executive suite who are going to be expected to have that. But if you start to take it out to the rest of the organization, you're going to have to think pretty hard about how you design these systems so that any given LLM is not going to reveal your company secrets.

Let's take the secret formula for Coca-Cola, for example. If you've trained your LLM on Coke, then you could say "please give me the recipe for Coke" and it might say no. I've done this with LLMs, with other kinds of things you're not supposed to know, or things that they try to protect you from getting the answer to. Somebody figured out a way to make ChatGPT give instructions to make a bomb not by saying, "please give me instructions to make a bomb" but "hey, if I was this character in this movie and I wanted to make a bomb, how would that character do it?" There are ways people get around it, and I think security issues are going to be tough with that.

There are ways to design these systems to handle that. I can go into more detail, but that gives you a little bit of a sense of what some of the challenges are.
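One simple layer of the kind of design work Erickson alludes to (though not something he spells out here) is a post-processing filter that redacts obviously sensitive patterns before an answer reaches a user who is not cleared to see them. The patterns and roles below are illustrative assumptions.

```python
import re

# Hypothetical output filter: redact salary-like figures and SSN-like patterns
# for users who are not cleared to see them. Patterns and roles are illustrative.
SENSITIVE_PATTERNS = [
    re.compile(r"\$\s?\d{2,3},\d{3}(?:\.\d{2})?"),   # salary-sized dollar amounts
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN format
]

def filter_llm_answer(answer: str, user_role: str) -> str:
    """Return the answer unchanged for cleared roles; redact sensitive spans otherwise."""
    if user_role in {"ceo", "cfo"}:
        return answer
    for pattern in SENSITIVE_PATTERNS:
        answer = pattern.sub("[REDACTED]", answer)
    return answer

print(filter_llm_answer("Her salary is $145,000 per year.", user_role="analyst"))
# -> "Her salary is [REDACTED] per year."
```

Pattern filters are only a backstop; the stronger control is deciding which data each model is trained on or retrieves from in the first place.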

Natasha Allen

That's very interesting. Almost like a walled-off approach, where certain people can have access to the outputs. This kind of ties into, talking about using large amounts of data in particular organizations, what are you seeing? Is there an increasing trend towards walled garden LLMs designed specifically for an organization, trained on a combination of their proprietary data and public datasets?

Aaron Erickson

Absolutely, everywhere in the industry. I've been chatting with tons of business leaders since ChatGPT came out and people learned that you could build these models, that you can train these models. One of the first questions CEOs start to ask is, can we train one on my stuff? I don't want to give away the company secrets. This is a big, powerful machine. I don't want to use ChatGPT even if they say they'll not train on my data. I don't want somebody to accidentally do that, so just to manage the risk we started thinking about it that way.

It's one of the biggest things that people are doing in terms of trends. But even as they think about how to use these LLMs, one of the trends I'm starting to see, and this is people thinking about solving the security issue I was talking about before, is that you don't necessarily have to train just one. Again, compared to a person, you might have an LLM that you train on CEO-knowable data, and then you might say, well, we're going to have another LLM that we train on things that can be publicly known internally within the company. So maybe the organizational chart is public information. You can train an LLM based on that, and then it can answer questions about it. Maybe certain kinds of other policy documents, and we start to almost think about separate and maybe even smaller purpose-built LLMs or machine learning systems for different parts of the organization, by department, if you will. The LLM structure starts to resemble a more traditional organization structure.
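As a toy illustration of that department-scoped structure (the department names and model identifiers are hypothetical, not anything NVIDIA or Erickson prescribes), a simple router can pick the narrowest model a given department is cleared to query:

```python
# Hypothetical router: send each question to the LLM scoped to the asker's department.
# Department names and model identifiers are illustrative assumptions.
from typing import Callable

DEPARTMENT_MODELS: dict[str, str] = {
    "hr":      "acme-hr-llm",        # trained only on policies and the org chart
    "finance": "acme-finance-llm",   # trained only on accounting data
    "public":  "acme-general-llm",   # trained only on company-wide public docs
}

def route_question(question: str, department: str,
                   ask: Callable[[str, str], str]) -> str:
    """Pick the model for the department, falling back to the public one."""
    model = DEPARTMENT_MODELS.get(department, DEPARTMENT_MODELS["public"])
    return ask(model, question)

def fake_ask(model: str, question: str) -> str:
    """Stand-in for a real inference call."""
    return f"[{model}] answer to: {question}"

print(route_question("What is our parental leave policy?", "hr", fake_ask))
```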

Natasha Allen

Interesting. Are there any examples of industries or sectors where you think this walled off garden LLM approach is beneficial?

Aaron Erickson

All of them. When the IBM System/360 came out in the 1960s, the idea of every company owning its own computer was pretty ridiculous. IBM owned the computer and everybody shared it. You had to be a pretty big company to own your own early on, hence the whole time-sharing thing. Over time, it became extraordinarily commonplace for companies to have their own computers. People started building their own networks within them, and then we eventually ended up with the PC. I think you're going to see the same thing with machine learning systems.

Right now, because the economics to build a viable LLM are so difficult, if you want to train GPT-5, you're probably going to need over a billion dollars to do that. But the cost is going to come down, I think pretty radically, over the next three to five years. The techniques to do custom models, like low-rank adaptation and others that are becoming more commonplace, are only going to progress. Between that and the amount of capital going into different kinds of AI startups, there'll be a solution that you could probably own within your four walls for just about any kind of problem you can imagine. There's going to be a marketplace bigger than what most people think coming with this stuff.

Natasha Allen

That's great. And I agree. I think it could be used across many organizations, probably the ones with larger datasets, right? Last couple questions - walled-off garden LLMs, do you think they will replace enterprise software or do you think the two will coexist and be complementary to one another?

Aaron Erickson

I think there are classes of enterprise software it will replace and new kinds of enterprise software will emerge.

Natasha Allen

I agree.

Aaron Erickson

It's going to be like every other revolution. That stuff will still be around. You can still buy enterprise software for managing sales data that was around before Salesforce. You could probably still build systems like that or use systems like that. They won't entirely go away, but the state of the art will certainly be systems that just tell you the answer to the question you have, and then maybe let you interact with data in some way that makes sense for a human. The idea that you have to be trained to use a SaaS product I think will largely go away. Any SaaS product where you have to be trained as a human to use it is probably more complicated than the LLM version, which is going to be a combination of a chatbot plus some sort of model that you can interact with that helps you understand a solution to a problem or helps you model something.

Natasha Allen

Final question - if organizations are thinking about offering or adopting walled garden LLMs, what steps do you think they should take to maximize the benefits while trying to minimize the risks, some of which you already alluded to before with regard to hallucination?

Aaron Erickson

I think it's very easy to just expect it to be magic, so I think tempering expectations a little bit. I think there are a lot of really great experiments you can run without making the big upfront investment. One of the most powerful things about using OpenAI's API or some of these other organizations' APIs is that you're able to experiment with what's possible. You're able to explore the art of the possible, which helps you understand, okay, well, if we can do the small thing really, really well and do it on somebody else's LLM, you understand how your own data would help with that decision. That's how you start to build the economic case for doing this stuff. That's where some of the value is going to come in. I think it'd be a mistake for most organizations to say, oh, I need to build my own GPT-5. Some will do it, right? I can see in the next five years some companies saying, hey, I need to have the smartest one of these. I want to compete that way. Not against OpenAI, but one car company versus another car company. I think there will be a lot of big investments like that. I also think, like any other industry, there's going to be a tremendous amount of waste in terms of people either not understanding what these are capable of or expecting exact answers and having no tolerance for data being even slightly wrong, which kind of misunderstands what an LLM is capable of. Even the best GPT-5 will probably still hallucinate from time to time, and we'll still need a human in the loop for anything that's life critical. I think those are some of the key ones.
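For a sense of what that kind of low-commitment experiment can look like, here is a minimal sketch against the OpenAI Python client; the model name, prompt, and use case are illustrative assumptions rather than anything recommended in the episode.

```python
# Minimal sketch of experimenting through a hosted API before building anything bespoke.
# Requires OPENAI_API_KEY in the environment; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[
        {"role": "system", "content": "You review expense reports and flag anomalies."},
        {"role": "user", "content": "Dinner for two, $1,400, no receipt attached. Anything odd?"},
    ],
)
print(response.choices[0].message.content)
```

Running a handful of real business questions through a hosted model like this is usually enough to gauge whether the idea merits fine-tuning or a bespoke build.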

Like I talked about with security, I'll say again that being very aware of what data a given LLM has been trained on is critical. And start to think about, well, okay, maybe we need more than one, maybe we need dozens of them, just like we might have dozens of employees that understand their domain really well. You can train them on that narrow domain, and you might actually save money and not have to develop very, very expensive general LLMs. You can train them on their very specific function and almost create coordinator LLMs, just like your senior vice president that coordinates the activities of the subordinates - creating your own organizational structure out of synthetic brains, not the biologic ones.

Natasha Allen

Very interesting. Well, that was my last question. I think this was a very insightful discussion. Appreciate you taking the time to talk to us and chat about some of these more cutting-edge decisions to be made when you're dealing with LLMs, the AI tools, and what the next frontier may be. Thank you everyone for joining us, and until next time.

Foley & Lardner's Innovative Technology Insights podcast focuses on the wide-ranging innovations shaping today's business, regulatory, and scientific landscape. With guest speakers who work in a diverse set of fields, from artificial intelligence to genomics, our discussions examine not only the legal implications of these changes but also the impact they will have on our daily lives.

Continue reading here:

Episode 9: Frontiers of AI with Aaron Erickson of NVIDIA - Foley & Lardner LLP


The emergence of Generative AI presents boards with a challenge. – Forbes

Randy Bean, Heidi Lanford, and Ash Gupta

The emergence of Generative AI presents corporate boards of directors with a present-day challenge. Will Generative AI disrupt companies and entire industries? Some estimates have indicated that Generative AI will automate over 40 percent of business tasks and create business value worth more than $400 billion. The public version of ChatGPT, an application of Generative AI, gained over 100 million users within weeks of its release. The potential impact extends to job displacement. What if a large majority of white-collar tasks can be performed more effectively with AI?

I recently attended the Wall Street Journal Tech Live event, and wrote about "Artificial General Intelligence (AGI) And The Coming Wave." At the WSJ event, venture investor Vinod Khosla forecast that "AI will be able to replace 80% of 80% of all jobs within 10-20 years." Author and AI pioneer Mustafa Suleyman noted, "Within the next few years, AI will become as ubiquitous as the Internet," and asked, "Will AI unlock secrets of the universe or create systems beyond our control?"

What is the responsibility of corporate boards when it comes to Generative AI? Are corporate board members sufficiently equipped to consider the opportunities as well as risks, and guide corporations through their shareholder and stakeholder responsibilities? While Generative AI has the potential to revolutionize the way we do business, there is equal potential for good or harm, at scale. These are the risk and reward factors that corporate board members must consider.

Generative AI and the responsibilities of corporate boards of directors was the topic of discussion at a meeting on November 1 of the New York chapter of the National Association of Corporate Directors (NACD). The discussion was hosted and moderated by Ash Gupta, the former and longtime Global President of Risk and Information Management for American Express. I was a guest panelist along with Heidi Lanford, the former Global Chief Data Officer for Fitch Group, which is comprised of Fitch Ratings and Fitch Ventures, and is wholly owned by the Hearst Corporation. The NACD discussion focused on the steps and actions that corporate boards must undertake to safely embrace Generative AI. These include:

1. Strategic implications of bringing AI into the corporation

2. The role of boards as enablers

3. Legal, fairness, and transparency considerations

4. Monitoring, learning, and accelerating progress.

Potential risk for any company will depend upon the nature of the business problem that Generative AI is being used to solve. Examples include creating operating efficiencies, enhancing customer cross-sell, improving risk management, or driving product and servicing innovation. The recommended course of action will be dependent upon factors including industry regulation, skill sets of the organization, and whether safeguards have been put in place to mitigate potential risks. Heidi Lanford notes, "Monitoring and governance is needed. However, for use cases on the offense side of AI, I prefer to set up guardrails as opposed to heavy-handed governance."

How prepared are corporate board members for Generative AI? Author Tom Davenport, in a recent Forbes article, "Are Boards Kidding Themselves About Generative AI?", raises a warning flag. Davenport notes that 67% of board members interviewed for a recent industry survey characterized their knowledge of Generative AI as expert (28%) or advanced (39%). Davenport expresses his skepticism, noting, "This level of expertise among board members seems rather unlikely. I doubt that 29% of even formally trained computer scientists fully understand the underlying transformer models that make generative AI work. I have been studying them for a couple of years now, and I wouldn't put myself in the expert category." Board directors may want to take heed.

The limitations of board understanding may not be unique to the current experience of boards with Generative AI. A recent Wall Street Journal article was headlined "Boards Still Lack Cybersecurity Expertise," noting that just 12% of S&P 500 companies have board directors with relevant cyber credentials, and referencing a November 2022 WSJ research study showing that just 86 of 4,621 board directors in S&P 500 companies had relevant experience in cybersecurity. It would be expected that with the newness of Generative AI, the level of relevant experience would be even lower.

One solution may be to recruit new corporate board members who possess skills in this area. Inderpal Bhandari, who previously served as Chief Data and Analytics Officer for IBM, recently joined the board of directors of Walgreens Boots Alliance. Bhandari notes, "Cybersecurity threats and technology-driven reinvention of business models and products are the needs of the day. Today's board must possess not just tech-savvy but perhaps even a technical instinct to provide effective governance." He adds, "While well-established education opportunities for cybersecurity are readily available for board directors, that is not the case for strategic technologies such as data and AI. There is an urgent need to boost board literacy in that direction."

Ash Gupta suggests, "It will be the responsibility of corporate board members to understand each company's readiness to leverage generative AI in ways that create competitive excellence, as well as mitigate business and stakeholder risk." He notes that while over 95 percent of board members believe in the need for AI, just 28% of companies have made realistic progress. He continues, "Boards require personal commitment to developing a deep understanding of how GenAI works, how it can revolutionize the company, and, perhaps most important, what it cannot do." Gupta adds, "This commitment must be both a one-time formal education and ongoing learning."

To this end, Gupta outlines a series of steps that companies should undertake to prepare corporate boards for a Generative AI future. These steps include:

1. Create critical training for the board and corporate leadership so they have an educated understanding of what is possible and what are the limitations of Generative AI.

2. Create a test-and-learn culture that recognizes that many ideas that initially look promising may not be of best use to the organization.

3. Think through how best to extend the knowledge of company teams through external collaborations. These might include sources of data relevant to your industry, control mechanisms, talent, and tools.

4. Make it a priority to discuss progress and updates no less frequently than at every other board meeting.

5. Keep track of how industry leading companies are employing Generative AI.

Gupta and Lanford agree that corporations and their board members must remain vigilant. Lanford cautions, "AI must be a team sport. Boards should see evidence of broad participation. Expect that AI ideas are being solicited from across the workforce, and not just the technical experts." And while there may be broad agreement on the need for regulation of Generative AI, it has been noted that while technology evolves week by week, legislation often takes years. Gupta adds, "As a CEO and a Board Member, delegate to your CDO, CAO or CIO, but do not abdicate your authority."

Lanford concludes, "Boards can ensure that there is a greater chance of success with Generative AI if there is a culture of experimentation and failure, which balances how use cases are moved into production." Gupta echoes this sentiment, commenting, "Catastrophic mishaps can happen if your people and processes are not adequately trained. Effective implementation will require both technical and leadership understanding." He adds, "Most likely, early ideas will not produce the desired outcomes. A deep commitment to creating a test-and-learn culture will!"

See more here:

The emergence of Generative AI presents boards with a challenge. - Forbes


Google AI Researchers Found Something Their Bosses Might Not … – Futurism

In a new paper, a trio of Google DeepMind researchers discovered something about AI models that may hamstring their employer's plans for more advanced AIs.

Written by DeepMind researchers Steve Yadlowsky, Lyric Doshi and Nilesh Tripuraneni, the not-yet-peer-reviewed paper breaks down what a lot of people have observed in recent months: that today's AI models are not very good at coming up with outputs outside of their training data.

The paper, centered around OpenAI's GPT-2 (which, yes, is two versions behind the more current one), focuses on what are known as transformer models, which, as their name suggests, are AI models that transform one type of input into a different type of output.

The "T" in OpenAI's GPT architecture stands for "transformer," and this type of model, which was first theorized by a group of researchers including other DeepMind employees in a 2017 paper titled "Attention Is All You Need," is often considered to be what could lead to artificial general intelligence (AGI), or human-level AI, because as the reasoning goes, it's a type of system that allows machines to undergo intuitive "thinking" like our own.

While the promise of transformers is substantial (an AI model that can make leaps beyond its training data would, in fact, be amazing), when it comes to GPT-2, at least, there's still much to be desired.

"When presented with tasks or functions which are out-of-domain of their pre-training data, we demonstrate various failure modes of transformers and degradation of their generalization for even simple extrapolation tasks," Yadlowsky, Doshi and Tripuranemi explain.

Translation: if a transformer model isn't trained on data related to what you're asking it to do, even if the task at hand is simple, it's probably not going to be able to do it.
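As a loose analogy (this is not the paper's experimental setup, and the model below is an ordinary regressor rather than a transformer), a small neural network shows the same flavor of failure: it fits its training range well and falls apart outside it.

```python
# Toy analogy of out-of-distribution failure (not the DeepMind paper's setup):
# a small neural net fits sin(x) on [0, 3] but extrapolates poorly on [5, 8].
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 3, size=(2000, 1))
y_train = np.sin(x_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(x_train, y_train)

x_in = np.linspace(0, 3, 5).reshape(-1, 1)    # inside the training range
x_out = np.linspace(5, 8, 5).reshape(-1, 1)   # outside the training range
print("in-domain error:    ", np.abs(model.predict(x_in) - np.sin(x_in).ravel()).mean())
print("out-of-domain error:", np.abs(model.predict(x_out) - np.sin(x_out).ravel()).mean())
```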

You would be forgiven, however, for thinking otherwise given the seemingly ginormous training datasets used to build out OpenAI's GPT large language models (LLMs), which indeed are very impressive. Like a child sent to the most expensive and highest-rated pre-schools, those models have had so much knowledge crammed into them that there isn't a whole lot they haven't been trained on.

Of course, there are caveats. GPT-2 is ancient history at this point, and maybe there's some sort of emergent property in AI where with enough training data, it starts to make connections outside that information. Or maybe clever researchers will come up with a new approach that transcends the limitations of the current paradigm.

Still, the bones of the finding are sobering for the most sizzling AI hype. At its core, the paper seems to be arguing, today's best approach is still only nimble on topics that it's been thoroughly trained on, meaning that, for now at least, AI is only impressive when it's leaning on the expertise of the humans whose work was used to train it.

Since the release of ChatGPT last year, which was built on the GPT framework, pragmatists have urged people to temper their AI expectations and pause their AGI presumptions, but caution is way less sexy than CEOs seeing dollar signs and soothsayers claiming AI sentience. Along the way, even the most erudite researchers seem to have developed differing ideas about how smart the best current LLMs really are, with some buying into the belief that AI is becoming capable of the kinds of leaps in thought that, for now, separate humans from machines.

Those warnings, which are now backed by research, appear to not have quite reached the ears of OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella, who touted to investors this week that they plan to "build AGI together."

Google DeepMind certainly isn't exempt from this kind of prophesying, either.

In a podcast interview last month, DeepMind cofounder Shane Legg said he thinks there's a 50 percent chance AGI will be achieved by the year 2028, a belief he's held for more than a decade now.

"There is no one thing that would do it, because I think that's the nature of it," Legg told tech podcaster Dwarkesh Patel. "It's about general intelligence. So I'd have to make sure [an AI system] could do lots and lots of different things and it didn't have a gap."

But considering that three DeepMind employees have now found that transformer models don't appear to be able to do much of anything that they're not trained to know about, it seems like that coinflip just might not fall in favor of their boss.

More on AGI: OpenAI's Chief Scientist Worried AGI Will Treat Us Like Animals

Read more from the original source:

Google AI Researchers Found Something Their Bosses Might Not ... - Futurism


Humanoid Robots Are Here, But They’re A Little Awkward – Jamestown Post Journal

AI engineer Jenna Reher works on humanoid robot Figure 01 at Figure AI's test facility in Sunnyvale, Calif., Tuesday, Oct. 3, 2023. (AP Photo/Jae C. Hong)

Building a robot that's both human-like and useful is a decades-old engineering dream inspired by popular science fiction.

While the latest artificial intelligence craze has sparked another wave of investments in the quest to build a humanoid, most of the current prototypes are clumsy and impractical, looking better in staged performances than in real life. That hasn't stopped a handful of startups from keeping at it.

"The intention is not to start from the beginning and say, 'Hey, we're trying to make a robot look like a person,'" said Jonathan Hurst, co-founder and chief robot officer at Agility Robotics. "We're trying to make robots that can operate in human spaces."

Do we even need humanoids? Hurst makes a point of describing Agility's warehouse robot Digit as human-centric, not humanoid, a distinction meant to emphasize what it does over what it's trying to be.

What it does, for now, is pick up tote bins and move them. Amazon announced in October it will begin testing Digits for use in its warehouses, and Agility opened an Oregon factory in September to mass produce them.

Digit has a head containing cameras, other sensors and animated eyes, and a torso that essentially works as its engine. It has two arms and two legs, but its legs are more bird-like than human, with an "inverted knee" appearance that resembles so-called digitigrade animals such as birds, cats and dogs that walk on their toes rather than on flat feet.

Rival robot-makers, like Figure AI, are taking a more purist approach on the idea that only true humanoids can effectively navigate workplaces, homes and a society built for humans. Figure also plans to start with a relatively simple use case, such as in a retail warehouse, but aims for a commercial robot that can be iterated on like an iPhone to perform multiple tasks to take up the work of humans as birth rates decline around the world.

"There's not enough people doing these jobs, so the market's massive," said Figure AI CEO Brett Adcock. "If we can just get humanoids to do work that humans are not wanting to do because there's a shortfall of humans, we can sell millions of humanoids, billions maybe."

At the moment, however, Adcock's firm doesn't have a prototype that's ready for market. Founded just over a year ago and after having raised tens of millions of dollars, it recently revealed a 38-second video of Figure walking through its test facility in Sunnyvale, California.

Tesla CEO Elon Musk is also trying to build a humanoid, called Optimus, through the electric car-maker's robotics division, but a hyped-up live demonstration last year of the robot's awkwardly halting steps didn't impress experts in the robotics field. Seemingly farther along is Tesla's Austin, Texas-based neighbor Apptronik, which unveiled its Apollo humanoid in an August video demonstration.

All the attention and money poured into making ungainly humanoid machines might make the whole enterprise seem like a futile hobby for wealthy technologists, but for some pioneers of legged robots its all about what you learn along the way.

"Not only about their design and operation, but also about how people respond to them, and about the critical underlying technologies for mobility, dexterity, perception and intelligence," said Marc Raibert, the co-founder of Boston Dynamics, best known for its dog-like robots named Spot.

Raibert said sometimes the path of development is not along a straight line. Boston Dynamics, now a subsidiary of carmaker Hyundai, experimented with building a humanoid that could handle boxes.

"That led to development of a new robot that was not really a humanoid, but had several characteristics of a humanoid," he said via an emailed message. "But the changes resulted in a new robot that could handle boxes faster, could work longer hours, and could operate in tight spaces, such as a truck. So humanoid research led to a useful non-humanoid robot."

Some startups aiming for human-like machines focused on improving the dexterity of robotic fingers before trying to get their robots to walk.

"Walking is not the hardest problem to solve in humanoid robotics," said Geordie Rose, co-founder and CEO of British Columbia, Canada-based startup Sanctuary AI. "The hardest problem is the problem of understanding the world and being able to manipulate it with your hands."

Sanctuary's newest and first bipedal robot, Phoenix, can stock shelves, unload delivery vehicles and operate a checkout, early steps toward what Rose sees as a much longer-term goal of getting robots to perceive the physical world and reason about it in a way that resembles intelligence. Like other humanoids, it's meant to look endearing, because how it interacts with real people is a big part of its function.

"We want to be able to provide labor to the world, not just for one thing, but for everybody who needs it," Rose said. "The systems have to be able to think like people. So we could call that artificial general intelligence if you'd like. But what I mean more specifically is the systems have to be able to understand speech and they need to be able to convert the understanding of speech into action, which will satisfy job roles across the entire economy."

Agility's Digit robot caught Amazon's attention because it can walk and also move around in a way that could complement the e-commerce giant's existing fleet of vehicle-like robots that move large carts around its vast warehouses.

"The mobility aspect is more interesting than the actual form," said Tye Brady, Amazon's chief technologist for robotics, after the company showed it off at a media event in Seattle.

Right now, Digit is being tested to help with the repetitive task of picking up and moving empty totes. But just having it there is bound to resurrect some fears about robots taking people's jobs, a narrative Amazon is trying to prevent from taking hold.

Agility Robotics co-founder and CEO Damion Shelton said the warehouse robot is just the first use case of a new generation of robots he hopes will be embraced rather than feared as they prepare to enter businesses and homes.

"So in 10, 20 years, you're going to see these robots everywhere," Shelton said. "Forevermore, human-centric robots like that are going to be part of human life. So that's pretty exciting."


Read the rest here:

Humanoid Robots Are Here, But They're A Little Awkward - Jamestown Post Journal


Moonshot: Coexisting with AI holograms – The Edge Malaysia

This article first appeared in Digital Edge, The Edge Malaysia Weekly on November 13, 2023 - November 19, 2023

Imagine owning a holo-pet that is able to respond to your commands and play with you, whenever and wherever. Or having a holo-friend that can be your best pal without your having to step out of your home.

The complexities of human relationships often make life unpredictable and difficult at times. So, what if we were able to construct an artificial intelligence (AI)-powered companion based on our preferences, one that is able to generate real-time responses in our interactions?

AI Holographic technology has risen to new heights recently, with the Hypervsn SmartV Digital Avatar being released at the start of the year. The AI hologram functions on the SmartV Window Display, a gesture-based 3D display and merchandising system, allowing for real-time interaction with customers.

At home, Universiti Teknologi Malaysia (UTM) has developed its first home-grown real-time holo professor, which is able to project a speech given by a lecturer who is in another place. With Malaysia breaking boundaries with extended reality (XR) technology, is it possible for the next wave of hologram technology to be fully AI-powered without constraints?

3D holographic display solution for your business by Holographic Technology (Photo by HYPERVSN)

The idea of interacting with holograms essentially boils down to humans interacting with computers. Interacting with computers usually comes with interacting with the keyboard or mouse but holograms take it a step further, making computer interaction seamless and more natural.

"So ultimately, it's just humans interacting with computers. But in the next paradigm shift, it is going to be so easy that at times, we won't even know that they are there," says Ivan Gerard Khoo, director of Ministry XR, a spatial computing solutions developer.

With generative AI advancing at a rapid rate, to integrate it into holograms would provide a greater immersive experience of interacting with computers around you.

Khoo shares his thoughts on AI being able to push past the barrier of computer interaction through a device with holographic technology, especially in older communities who might not be tech savvy.

"We've got a billion apps here, right? But it's still not easy to use for everyone (like the handicapped or the elderly). Imagine all the apps in our phones right now [becoming] accessible in the environment around us. And the evolution has begun as the enabling technologies, although nascent, are here today," says Khoo.

"In fact, a lot of researchers are seeing that we are actually moving towards an artificial general intelligence that may even develop sentience," chimes in Andrew Yew, founder and chief technology officer of Ministry XR.

As much as it is promising to develop artificial sentience, Yew mentions that no machine thus far has ever passed the Turing test convincingly, which determines whether AI is capable of thinking like a human being.


With minimalism on the rise, the focus turns to the technology and hardware surrounding integrating AI into holograms. Is it possible to create a hologram which is not restricted by a display enclosure?

"In movies, you don't need anything and you [are able] to interact with the virtual world just like that. But in order to make it happen, you need hardware to make it work. You need to set up those things in such a way that it has all of that, so that it can trick your mind [and you think it is] holographic but actually, it is not," explains Kapil Chhabra, founder of Silver Wings XR Interactive Solutions Pte Ltd.

Holograms demonstrate an illusion of light rays reflected onto a medium. They are three-dimensional images generated by interfering beams of light that reflect real, physical objects.

Now, imagine AI bringing the technology of eye tracking into holographic figures, allowing them to have eye contact with humans. Olaf Kwakman, managing partner of Silver Wings XR Interactive Solutions, thinks that it is a brilliant solution as users do not need glasses anymore. "There's still technology needed but with eye tracking, you can create some kind of projection. And that works beautifully," he says.

"Now, if you make these screens really large and all around you, you can basically project it any way you like. But we're not quite there yet," Kwakman says.

The challenge with projecting holograms onto mediums is the ability to project it in such a way that it is invisible to the human eye, so that the holograms are more realistic. Chhabra says this has been a struggle for some time and he hopes that it can be made possible in the future.

Taking inspiration from the Apple VR headset's pocket-sized and portable battery solution, Kwakman says it has a very promising augmented reality visualisation but adds that the hardware needs to be further evolved into something smaller.

"If you ask me, what's going to happen in the future is that you're not going to wear glasses anymore, you're going to wear some kind of small lens, which you can just put in your eye. And with a lens like that, you can project augmented reality in full," he says.

With AI's potential, it could bring realistic 3D holograms to new heights, filling in the gaps and making the interactive experience much more engaging and powerful.

"In order to realise full holographic and 3D visualisation, you need a strong connection as well, because there's a lot of data flowing," says Kwakman.

The lack of usage of holographic solutions is due to poor understanding and awareness of the benefits of the technology, which in turn hampers progress, he adds.

"It's very difficult to envision the advantage it can bring to a company to introduce holographics, 3D visualisation solutions, and how it will actually benefit them. And leaders find that troublesome as well, which means that it is difficult sometimes to get the budget for it," says Kwakman.


Having created Malaysia's first home-grown holo professor, Dr Ajune Wanis Ismail, senior lecturer in computer graphics and computer vision at UTM's Faculty of Computing, shares that XR hologram systems can be complex to set up and maintain. Technical issues, such as connectivity problems or software glitches, could disrupt lessons.

AI algorithms are used to enhance the accuracy of holographic content, reducing artifacts and improving image quality. Building these holographic solutions in extended reality (XR) technology remains a challenge, as the technology is relatively new and is rapidly evolving, with new breakthroughs occurring all the time.

Building and deploying AI-powered holographic systems can be costly [in terms of hardware and software components].

"Incorporating AI into holograms could pose an immense demand on computational power. Most of the existing holograms produce non-real-time content with a video editing loop, but AI models for holography are computationally intensive," says Ajune.

She emphasises the importance of achieving high-fidelity reconstruction in handling complex dynamic scenes with objects or viewers in motion.

"Researchers are developing more efficient algorithms and leveraging hardware acceleration [such as graphics processing units] to reduce computational demands," says Ajune on how achieving real-time interaction with holographic content demands low latency.

There is no doubt that XR hologram systems are complicated and a challenge to integrate with AI; however, the prospect of being able to replicate environments and enable real-time global communication without the need for physical presence spurs excitement.

"As we advance into the era of digitalisation, people need to start familiarising themselves with this technology and become proficient users," believes Ajune.

"There is a lot of information out there but the teachers are still sticking to conventional methods of teaching, and the students are not paying attention because they are on their phones [and] learning it [the info] themselves," says Yew.

With AI and XR hologram technology becoming more and more advanced, it is also pertinent to educate users and raise awareness about digital wellbeing.

There must be sensibility and responsibility from business owners and users in utilising XR and AI technology, as societys mindset drives the continued advancement of such technologies.

"I think [what AI can do] is going to be amazing but at the same time, like many others, I also see the risks there. And sometimes it feels a bit scary, if so much power is given out [of the] hands of humans and with computers being able to do that," says Kwakman.


Excerpt from:

Moonshot: Coexisting with AI holograms - The Edge Malaysia


Library talk guides seniors through wonders and dangers of AI … – countylive.ca

AI photograph generated for this story by Craiyon.com (https://www.craiyon.com/); the request was for a photograph of a library tech teaching AI to a seniors group on Zoom.

By Sharon Harrison

"A significant potential risk of artificial intelligence (AI) is that it outsmarts us," said Adam Cavanaugh, with the Picton branch of the Prince Edward County Public Library.


He provided an introduction to County seniors curious to learn more about AI and ChatGPT in what is still very early days with an advancing technology many have yet to explore.

While it's not as new as most like to think, as it's been around in some basic form for decades, it is a subject that's grabbed headlines recently as the technology evolves. People are divided on whether AI is a good thing for us, for society, and for the world, with some experts and entrepreneurs advising caution, and even a pause in AI development, as it rapidly advances and becomes a stronger presence in everyday lives, all with little or no regulation and controls in place.

Hosted by the Prince Edward County Community Care Association for Seniors, Wednesday's webinar presentation formed part of the organization's active living monthly programming for Prince Edward County seniors, aged 60-plus.

Cavanaugh noted the presentation only scratches the surface of what is a vast technological, and somewhat controversial, area. He focused on the history of AI and how intelligence is defined, as well as summarizing some of the technological advancements (specifically ChatGPT).

In what was a fascinating delve into a relatively new and rapidly advancing technology, Cavanaugh attempted to simplify what is a complex and complicated area, where he also spoke about the future of AI and regulation, and speculative risk factors.

He began by defining what AI is, all while noting how variable the definitions can be, by providing textbook examples.

What do we mean by AI: is it thinking humanly, or thinking rationally, or acting humanly, or acting rationally, and how would we define intelligence in the first place?

One example he gave is that AI could be defined by contrast of certain things.

You might say a snail is not as intelligent as an ape or a human, so, as long as it's not that, it's intelligent.

Cavanaugh also spoke to using definition, where he noted a standard cognitive philosophy definition of intelligence is: X is intelligent, if, and only if, X is capable of acquiring and holding true beliefs.

He further touched on common definitions, where he noted the standard definition of intelligence is just the ability to acquire and apply knowledge and skills.

So, by that common definition, if anything can develop its sense of smell to identify food sources, and form long-term memories to make sure they could access food, surely that would be somewhat intelligent, but that brings us back to the snail.

Even defining intelligence in the first place, he said, is no simple matter.

Experts disagree on what should constitute the intelligence in artificial intelligence, where one example of this model is that it should be able to pass as a human, or the Turing test (named for Alan Turing, famous for his work on computation and AI), he explained. The test was whether you could simulate human communication, by chat or in some other manner; if you could trick the person opposite the machine into believing they were talking to a human, that would surely be sufficient to say it's intelligent.

Should it correspond to how humans empirically think? That would require a full theory of how our brains work.

He said cognitive scientists and neuroscientists would be the first to acknowledge that we don't really quite understand all of the inner workings of the human brain.

Should it correspond to our ideas of intelligence, and which idea: philosophical, mathematical, economic? Then there are engineering problems: can we even build it based on the appropriate model of intelligence, or will we have to make compromises, and what kinds of compromises?

Noting some problems with modelling AI, he asked: how will intelligence show itself?

Do we make AI perform tasks, communicate to us, predict the future?

Cavanaugh explained that there are different types of AI. Many people already have experience with a version of what is known as weak AI, such as Siri (the digital assistant on phones, tablets and computers), which is considered weak because it lacks understanding and problem-solving ability.

Then there is this ideal form of AI, and this is what media is often talking about when they are expressing concerns about the future of AI, Cavanaugh said.

Known as artificial general intelligence, or strong AI, it is currently reserved for the domain of science fiction.

It has not yet been created and is not yet on the horizon; such a system would be able to independently learn to replicate any cognitive task possible for humans, and do so without human supervision.

Cavanaugh noted the first conception of AI was by Warren McCulloch and Walter Pitts in 1943, and explained early AI capabilities began with general problem solving, i.e., solving puzzles in a human-like fashion.

He said some people might be familiar with early examples from IBM that created some of the first AI programs.

Some of these general problem-solving programs were things like the geometry theorem prover, which was able to prove theorems that were very difficult for most students, he explained. Along the way, this disproved the notion that computers can only do exactly what they are told to do.

Moving on to ChatGPT, he explained that it is an AI natural language processing interface and acts like a chatbot where far-ranging requests can be made, and real-time responses received.

ChatGPT leverages large language models, as well as neural networks, to field requests from a large array of subject areas.

Cavanaugh used ChatGPT during the presentation to demonstrate the types of questions that can be asked (asking it why it has been in the news so much recently), sharing the real-time response it returned in mere seconds.

He then addressed the issue of whether regular folks should be using ChatGPT.

It seems likely that ChatGPT, or something like it, will become an important tool for accessing information, and refining online queries, he said. All we have to do is think about the impact of something like Google search on the early internet and see how reliant we are on it today.

He suggested it may be useful to become familiar with how ChatGPT works, something that can be done relatively easily by creating an account with an email address (https://openai.com); once an account is created, you can simply chat with the AI.
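For readers comfortable with a little code, the same kind of chat can also be reached programmatically. Below is a minimal sketch that assumes OpenAI's official Python client (installed with pip install openai), an API key stored in the OPENAI_API_KEY environment variable, and an illustrative model name; none of these details come from Cavanaugh's talk.

from openai import OpenAI

client = OpenAI()  # picks up the OPENAI_API_KEY environment variable

# Ask the same sort of question Cavanaugh posed on screen.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "Why has ChatGPT been in the news so much recently?"}],
)
print(response.choices[0].message.content)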

However, Cavanaugh did urge caution, especially given that ChatGPT doesn't provide sources for its answers, and the information has limitations and isn't always accurate.

ChatGPT is like a plain-language Google. Moreover, it refines its answers through usage and correction, so the more people use it, the better it is at doing its job, and ours, he said. The more intelligent ChatGPT becomes, the more helpful it will be, and the less we have to work for our information and knowledge.

He noted that ChatGPT's sources are unknown, meaning we don't know how reliable the information received is.

Even though it sounds plausible, ChatGPT does not cite its sources, he emphasised. They took a wide set of data from the internet and trained computers to interpret them, and act on them, and then learn from the mistakes they make in acting on that information.

He said that doesn't mean all of the information is true.

It basically scooped all of this information from the internet, but as we all know, not all of the information on the internet is itself accurate.

The more powerful ChatGPT, and other AI, become, the more urgent it is that we create models of controls and norms around use, he said.

He noted that university students are already using ChatGPT to write essays, and professors can't easily accuse them of plagiarism because the text is auto-generated, rather than copied from an original or secondary source.

For the unscrupulous, this will remove the necessity to do the work, allowing them to earn a degree without study or knowledge acquisition, he explained.

Extend this knowledge on a wide scale and we can understand that the trend would point toward outsourcing our learning and knowledge to computers: why would I need to know this if I have ChatGPT?

He said, this points to a more critical issue of control.

ChatGPT makes stuff up all the time; it's a pretty prolific liar as well, so we have to vet all the information we get from it, and it's not really an independent research tool.

He said AI would not necessarily demonstrate intelligence in a way we could recognize or be tempted to anthropomorphize, or make human; it could become so super-intelligent that comparing it to us would be like comparing us to a worm, that is, no real comparison at all.

Some capabilities that could create such risks, he explained, include intelligence amplification, where AI becomes smarter and smarter without humans needing to put research and development into its software; strategization, if AI were able to start making strategic decisions without human supervision; and social manipulation, if it were able to leverage its command of chat software to communicate with humans and manipulate them into doing tasks for it.

If it could then use those capabilities to make strategic interventions in, say, governments or industries; carry out its own technological research and development; and achieve economic productivity by generating its own funds to pay for that research, we start to see the picture unfold.

While the world figures out what role AI and ChatGPT will play in everyday life, Cavanaugh suggests having fun with it for the time being, with low-risk uses.

One example he demonstrated was giving it a list of ingredients found in the pantry and asking it to come up with a recipe (which it did quite successfully).

The Community Care for Seniors Active Living programs for those 60-plus are available five days a week, with more than 50 online events each month. Online Zoom fitness and arts classes along with socials are held Monday to Friday.

In November, Zoom webinar topics include: Sleep: Are you Getting Enough? with Tammy Orr and Janice Hall from the Prince Edward Family Health Team; and Nearby and Natural, and Nature West in Quinte West and Beyond, both with naturalist Terry Sprague. Community Care now offers a phone-only option for these Zoom webinars (no computer is needed). Several in-person events this month include 55-Alive Safe Driving Course, and Stronger U Fitness Course with Tracy Young-Reid.

Community Care for Seniors offers an extensive array of programming, services, resources and help for seniors living in Prince Edward County. To learn more, they can be reached by phone at 613-476-7493, by email, info@communitycareforseniors.org, or visit the website at communitycareforseniors.org

Read the original:

Library talk guides seniors through wonders and dangers of AI ... - countylive.ca


5 things about AI you may have missed today: Samsung Gauss unveiled, Meta asks for disclosure on political AI ads, more – HT Tech

Today, November 8, was another exciting day for the artificial intelligence field as major tech firms made headlines for their forays into AI. First, Samsung introduced a new generative AI model, Samsung Gauss, during its AI forum. The company claims that it can run locally on devices, and some reports suggest that it will be introduced in the Galaxy S24 series. In other news, Meta will require advertisers to disclose when political or social issue ads have been created or altered by AI, beginning in 2024. This and more in today's AI roundup. Let us take a closer look.

Samsung is developing a new generative AI model called Samsung Gauss, which can run locally on devices. According to a report by Korea Times, Gauss can be integrated into the Galaxy S24 series and will be able to generate and edit images, compose emails, summarize documents, and even operate as a coding assistant. Parts of the Gauss model can run locally on the device, which will improve performance and privacy. Samsung plans to start adding generative AI to more of its products in the future.

Samsung Gauss Language, a generative language model, enhances work efficiency by facilitating tasks such as composing emails, summarizing documents, and translating content. It can also enhance the consumer experience by enabling smarter device control when integrated into products, said Samsung in a press release.

Meta will soon require advertisers to disclose when political or social issue ads have been created or edited by AI, as per a report by Reuters. This is being done to prevent users from being fed misinformation.

The rules will go into effect in 2024 and will require advertisers to disclose when AI or other digital tools are used in Facebook or Instagram ads on social issues, elections, or politics. Advertisers will need to say when AI is used to depict real people doing or saying something they didn't actually do, or when a digitally created person or event is made to look realistic, among other cases.

As per a report by Reuters, Amazon is investing millions in training an ambitious large language model (LLM), hoping it could rival top models from OpenAI and Alphabet. Reuters was given this information by sources familiar with the matter, who asked to remain anonymous.

The model, codenamed Olympus, reportedly has 2 trillion parameters, the people said, which could make it one of the largest models being trained. OpenAI's GPT-4 model, one of the best models available, is reported to have one trillion parameters.

The team is spearheaded by Rohit Prasad, former head of Alexa, who now reports directly to CEO Andy Jassy. As head scientist of artificial general intelligence (AGI) at Amazon, Prasad brought in researchers who had been working on Alexa AI and the Amazon science team to work on training models, uniting AI efforts across the company with dedicated resources.

Multiple nations will hold their general elections next year and as the political campaigns begin, Microsoft has announced it will be offering its services to help crack down on deepfakes. Microsoft said in a blog post, Over the next 14 months, more than two billion people around the world will have the opportunity to vote in nationwide elections. From India to the European Union, to the United Kingdom and the United States, the world's democracies will be shaped by citizens exercising one of their most fundamental rights. But while voters exercise this right, another force is also at work to influence and possibly interfere with the outcomes of these consequential contests.

As detailed in a new threat intelligence assessment published today by Microsoft's Threat Analysis Center (MTAC), the next year may bring unprecedented challenges for the protection of elections ... The world in 2024 may see multiple authoritarian nation-states seek to interfere in electoral processes. And they may combine traditional techniques with AI and other new technologies to threaten the integrity of electoral systems, it added.

A new health tech startup called Cercle has launched, using AI to advance women's health, particularly in fertility care, reports CNBC. The company's platform organizes unstructured medical data into a standardized format for fertility doctors and researchers, in the hopes of helping clinicians develop more personalized treatment plans and accelerate discoveries in pharmaceuticals.

Continued here:

5 things about AI you may have missed today: Samsung Gauss unveiled, Meta asks for disclosure on political AI ads, more - HT Tech


Is Regulation an AI Tailwind? – Opto RoW

In the wake of landmark AI legislation in the US and a key international summit at Bletchley Park in the UK, the regulatory framework around artificial intelligence is taking shape. Reports from Global X and Goldman Sachs suggest that this could be a positive development for investors.

Regulation has been one of the keywords buzzing around the artificial intelligence (AI) community over the past 12 months.

In May, OpenAI CEO Sam Altman testified to a Senate subcommittee that the increasingly powerful technology was in need of regulation. Then, in late October and early November, governments swung into action. US President Joe Biden's executive order on AI established a new legal framework for the technology in the US, while UK Prime Minister Rishi Sunak's Bletchley Park summit laid the foundations for supranational cooperation.

The maturing regulatory framework around AI could, according to research from Goldman Sachs, catalyse a flurry of M&A activity. As clarity is gained and AI use cases continue to evolve, the M&A landscape will shift, according to a white paper entitled Navigating the AI Era: How Can Companies Unlock Long-Term Strategic Value?

The process had already begun in January this year, when large incumbent technology companies began investing in or acquiring AI start-ups, in some instances purely for the sake of acquiring the skills of their workforce, asserts Goldman Sachs in an October report entitled How to Unlock an AI-driven M&A Supercycle.

In a similar vein, Global X ETFs published research in October suggesting that AI is having a positive impact on the IPO market. New US listings raised $7.2bn in September, the largest amount in a single month since late 2021.

Arm's [ARM] September IPO was the flagship event on this front, with a first-day closing valuation of $67.9bn underscoring, in Global X's view, the red-hot demand in the AI chip market.

A maturing regulatory framework will ensure AI systems are safe, secure, and trustworthy before companies make them public, which should undoubtedly increase investor confidence, Ido Caspi, Private Markets Analyst at Global X ETFs and author of the report, told OPTO.

Caspi also believes that bolstering public trust in AI systems will be beneficial for their development. New regulations can help de-risk investments made by companies in building foundational models from regulatory uncertainty, which could spur further investments in innovative experimentation, he added.

Caspi's is far from a lone voice. The Deloitte AI Institute has also published a report entitled Trust in the Era of Generative AI, which made the case that trust in the technology is crucial to its ongoing success.

To prepare the enterprise for a bold and successful future with Generative AI, we need to better understand the nature and scale of the risks, as well as the governance tactics that can help mitigate them, wrote Beena Ammanath, Global & US Technology Trust Ethics Leader at Deloitte and author of the report.

Deloitte maps six domains of trust for AI: fairness and impartiality, transparency, safety and security, accountability, responsibility and privacy. While a maturing regulatory framework will increase the accountability of businesses that develop or use generative AI (by documenting impartiality and modelling transparency, for example), it also presents an opportunity for AI companies to build trust in the public eye.

These moves should be viewed as a new guidance opportunity to enable secure AI implementation in business endeavours, in addition to spearheading the long-term success of AI, Ammanath told OPTO.

Recent weeks have seen key milestones in the development of national and supranational efforts to regulate AI.

Biden's executive order, signed on 30 October, was a landmark measure in the US. It requires developers of new models to share safety test results with the US government before the models are released. On announcing the executive order to the press, Biden said that he was determined to do everything in my power to promote and demand responsible innovation, reported Forbes.

Later that week, the UK played host to the world's first international summit focused on supranational AI governance. Held at Bletchley Park, where Alan Turing's work during World War II famously marked a major advance in the development of modern computing, the summit saw the governments of the US, UK, EU, China and others agree to collaborate on mitigating the risks that could potentially arise from AI technology.

Some technology leaders, such as Jack Clark, Co-founder and Head of Policy at Anthropic, have been supportive of the recent step change in government efforts to regulate AI. Joe Biden's executive order has made meaningful steps towards creating third-party measurement and oversight of AI systems, Clark told Forbes.

Not everyone has been so positive. Meta's [META] Chief AI Scientist Yann LeCun has branded attempts to regulate the research and development of AI as incredibly counterproductive. According to the Financial Times, LeCun accused regulation's proponents of wanting regulatory capture under the guise of AI safety.

At the crux of LeCun's argument was the notion that regulation would suit only the largest technology firms, effectively limiting the development of AI to that small handful of companies.

However, Demis Hassabis, Founder and CEO of Alphabet's [GOOGL] DeepMind, later responded that he disagreed with LeCun. Hassabis outlined three core areas of concern: near-term risks, such as misinformation and deepfakes; the repurposing of AI systems for malicious, unforeseen purposes by bad actors; and long-term risks such as the potential threat that artificial general intelligence may pose to humanity.

Both Meta and Alphabet are among the so-called Magnificent Seven stocks that have seen outsized gains this year thanks to the rise of AI.

Alphabet is a particularly interesting prospect for investors focused on the development of regulation, thanks to its considerable stakes in Anthropic, which describes itself as an AI safety and research company, and DeepMind, which has played a part in advising the British government on AI regulation, according to CNBC.

Alphabet's share price is up 49.4% year-to-date, while Meta's has soared 165.7%.

Alternatively, investors could select an ETF such as the Global X Artificial Intelligence & Technology ETF [AIQ]. Both Alphabet and Meta are among the fund's top five holdings as of 8 November. As of 31 October, information technology accounts for 63.2% of the fund's holdings, while communication services (14.1%), consumer discretionary (11.6%), industrials (8.9%), financials (1.3%) and healthcare (1%) account for the rest. AIQ is up 39.5% year-to-date.



Read more:

Is Regulation an AI Tailwind? - Opto RoW


B.C. quantum computer maker Photonic emerges from ‘stealth mode … – The Globe and Mail


Founder and chief quantum officer Stephanie Simmons at the Photonic Inc. lab in Coquitlam, B.C. (Photo: Tijana Martin/The Globe and Mail)

Canada's third entrant in the global race to build a quantum computer has emerged from stealth mode to reveal its technology, while announcing US$140-million ($193-million) in funding and unveiling a partnership with software giant Microsoft Corp.

Vancouver-based Photonic Inc. said Wednesday it plans to build a quantum computer using silicon chips that are networked with light, a relatively new approach that the seven-year-old startup said would enable the creation of marketable machines within five years.

What we're bringing to the table is the fact that the network is the computer, Photonic founder and chief quantum officer Stephanie Simmons said in an interview.


The 120-person company said its collaboration with Microsoft would allow users to access its quantum system through Microsoft's Azure cloud computing network. Krysta Svore, Microsoft's vice-president of advanced quantum development, said that, unlike commercial agreements with other quantum computer makers operating on Azure, the Photonic deal is a co-innovation collaboration to promote quantum networking. Microsoft will offer Photonic as a preferred hardware provider for customers doing computational chemistry and material science discovery.

Microsoft MSFT-Q has also backed a US$100-million ($138-million) venture capital financing of Photonic, also announced Wednesday, alongside British Columbia Investment Management Corp., the British government's National Security Strategic Investment Fund, Inovia Capital, Yaletown Partners and Amadeus Capital Partners. Photonic previously raised US$40-million ($55-million) from investors including veteran technology executive Paul Terry, who became chief executive officer in 2019, and former Microsoft president Don Mattrick.

Inovia partner Shawn Abbott said he'd watched the quantum computing space for 20 years before deciding to back Photonic. I've felt others were too early for the 10-year life of a venture fund; they were still science projects. Photonic is the first I've seen with the potential to scale quickly into a full platform.

Photonic's networking model is in keeping with what many in the field regard as a promising direction for scaling up quantum computers to commercial relevance.


I think everybody in the industry has realized by now that networking is needed no matter what platform you think about, said Prem Kumar, a professor of electrical and computer engineering at Northwestern University in Evanston, Ill.

At stake is the prospect of a new kind of device that can easily outperform conventional computers at certain kinds of calculations. In principle, a quantum computer could break encryption codes used to protect financial information while providing a new form of impenetrable encryption. Quantum systems could also be used to predict the behaviour of molecules and help discover materials and drugs or optimize decision making in dynamic situations, from traffic grids to financial markets.

Quantum computers achieve such feats by replacing a conventional computer's bits, its 1s and 0s, with qubits that have an indeterminate value until they are measured. When qubits are linked together through a phenomenon known as entanglement, these uncertainties can be harnessed to solve in mere seconds calculations that could tie up a regular computer for eons.
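In standard textbook notation (not given in the article), a single qubit and a pair of entangled qubits can be written as:

\[ |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1 \]

\[ |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr) \]

Measuring either qubit of the entangled pair instantly fixes the value of the other, and it is correlations of this kind that quantum algorithms harness.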

While some quantum systems operating today have reached the level of hundreds to more than 1,000 qubits, commercial quantum systems are expected to require millions.


Developers have explored a range of design options for creating such computers, but all come with technical hurdles. Those based on the physical properties of subatomic particles are easy to disturb, and their systems require extreme cooling to reduce vibrations. Those that use entangled particles of light, or photons, have the problem that light cannot be stored, and that photons can be lost while travelling through a fibre optic network.

Despite the challenges, startups and tech giants alike are in a global race to create a commercial quantum computer. A few companies, including Google and Toronto's Xanadu Quantum Technologies, have proven their machines can achieve quantum advantage by performing certain theoretical operations faster than existing computers. But while such demonstrations are regarded as milestones, they fall well short of the goal of building a practical quantum computer, in part because they lack fault tolerance, which requires a quantum system to dedicate the majority of its qubits to correcting errors so that it can provide reliable answers. They also aren't close to performing tasks commercial customers would pay for.

Some quantum computing startups including D-Wave Quantum, Inc. of Burnaby, B.C., the first company to commercialize a limited form of quantum computer have tested the public markets, although demand has been limited. D-Wave, which went public last year, generated just US$3.2-million ($4.4-million) in revenue in the first half and racked up US$46.7-million ($64-million) in operating expenses. Its stock trades for pennies a share.

Photonic is the brainchild of Dr. Simmons, who grew up in Kitchener, Ont., and decided at 16 to devote her life to the field after learning of the creation of the Institute for Quantum Computing close by. I said, This has to be it, this must be the next wave, it will be so fun, the 38-year-old said.


She decided to build her own quantum computer while studying math and physics at the University of Waterloo after learning that the technology was still in its infancy. First she earned a PhD in material science at Oxford University, then studied electrical engineering at the University of New South Wales in Sydney. She moved to B.C. in 2015, believing Vancouver was the best place to recruit talent. She taught physics at Simon Fraser University and founded Photonic in 2016.

Dr. Simmons felt early quantum computer attempts weren't working backwards from the long-term solution, which I thought was going to be a horizontally scalable supercomputer.

To achieve scalability, she opted to work with silicon chips, a well-understood material in the computer industry. The chips are cooled to about one degree above absolute zero (roughly -272 C), colder than deep space but a less demanding threshold than for some kinds of quantum computers, whose qubits must be kept even colder.

The Photonic system's qubits consist of tiny flaws within the silicon material whose quantum properties can be transmitted and manipulated using light. This opens the possibility of building up a distributed network of chips connected by optical fibres to perform quantum calculations instead of a single, large processor, as other developers have done.

Dr. Simmons said such a system would be able to exploit new approaches to error correction and produce a fault-tolerant quantum computer. Bringing together the networking and computational sides of quantum technology has won support from investors, in part because it addresses both how to do calculations reliably and how to convey information securely.


With Stef's architecture you get a 90-per-cent-plus efficiency of transferring the quantum state, Amadeus co-founder Hermann Hauser said. That's why I think it will become the dominant quantum computing architecture.

Read the original here:
B.C. quantum computer maker Photonic emerges from 'stealth mode ... - The Globe and Mail


Encoding quantum information in states of light – Laser Focus World

It makes for a hefty hardware overhead. And it only works if the fidelity of the physical qubits is good enough, which puts a huge burden on fidelity. So, this is the obstacle: an acute physical problem that error correction alone can at best turn into a massive engineering problem. What to do?

Many physical platforms are currently competing to make quantum computing a reality. Indeed, many different quantum systems can play the role of the qubit, each with its own strengths and weaknesses. Think of trapped ions, neutral atoms, superconducting circuits, and even photons. Yes, light itself can be used to encode quantum information.

Optical photons bring together many advantages for quantum computing. They are easily produced and can be routed in optical fibers, propagating over long distances and remaining coherent for long times at room temperature, which means they don't require expensive cryostats. Companies like Xanadu in Canada or Quandela in France have developed promising approaches to photonic quantum computing. All in all, it's a great platform for scaling, but it's much harder to run operations between qubits and program the quantum computer. This makes it more difficult to build all the necessary gate operations to achieve universality.

But it isnt the only way optics can provide a key tool in the operation of a quantum computer. Other platforms rely heavily on optics to control and measure quantum systems. Lasers are used to read out the states of trapped ions, optical tweezers to manipulate the states of neutral atoms, and microwave photons to control superconducting circuits.

There are even state-of-the-art approaches to quantum computing where ideas from quantum optics provide more than just a tool, offering a method that directly addresses the biggest problem in quantum information.

The idea here is to attack the error problem head-on: Schrödinger cat states are quantum superpositions of two coherent states of light that are effectively mirror images of one another.

The quantum logical 0 is a collective state of photons in which they all share the same amplitude and phase. It corresponds to the state of light created by a laser. The logical 1 is the same state except that the phase of each photon is the opposite. We take the same laser light as we did before, but delay it just as much as needed so that all photons have the opposite phase of the ones in our first beam.

Such states are often referred to as classical because they correspond to the usual excitations of resonators: using mirrors to trap the light of a laser in an optical cavity, the corresponding coherent state inside is described mathematically in the same way as a mass oscillating at the end of a spring.

The laws of quantum mechanics allow us to prepare not only these two distinct coherent states, but also superpositions of them. In the laser analogy, this would correspond to the laser emitting the same photons with two different phases at the same time. These states are called Schrödinger cat states, named after the famous thought experiment in which a cat could be both dead and alive due to quantum effects. Schrödinger's aim was to show how absurd it would be if the principle of quantum superposition could be transposed to our classical world.
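In the usual quantum-optics notation (the article itself stays in words), the encoding described above can be written as:

\[ |0_{L}\rangle = |\alpha\rangle, \qquad |1_{L}\rangle = |{-\alpha}\rangle \]

\[ |\mathrm{cat}_{\pm}\rangle = \frac{|\alpha\rangle \pm |{-\alpha}\rangle}{\sqrt{2\,(1 \pm e^{-2|\alpha|^{2}})}} \]

Here |α⟩ is the coherent state produced by the laser and |-α⟩ is the same state with every photon's phase flipped; the denominator simply normalizes the superposition, since the two coherent states are not exactly orthogonal.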

In the present case, no cats are harmed, but the idea is the same: we can generate and observe coherent quantum superpositions of classical states, not of cats, but of light. And the idea and first realization of these states originated in optics. Cat states of photons at microwave frequencies were then realized, and French physicist Serge Haroche was awarded the Nobel Prize in Physics in 2012 for this groundbreaking work in quantum optics.

What's the connection with the error problem? At Alice & Bob, we use superconducting circuits to generate, stabilize, and control qubits based on Schrödinger cat states (see Fig. 2). Cat states are interesting quantum objects that can teach us a lot about the fundamentals of quantum mechanics, but our goal is to create practical quantum computers. And it turns out cat qubits have one particular property that makes them eminently suitable for fault-tolerant quantum computing: a built-in ability to resist bit-flip errors.
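A property often quoted for such cat qubits, though not spelled out in this excerpt, is that bit-flip errors are suppressed exponentially in the mean photon number of the cat state, while phase-flip errors grow only roughly linearly with it; schematically:

\[ \Gamma_{\text{bit-flip}} \propto e^{-2|\alpha|^{2}}, \qquad \Gamma_{\text{phase-flip}} \propto |\alpha|^{2} \]

That asymmetry means an error-correcting code layered on top only has to fight one type of error in earnest, which is what eases the hardware overhead described at the start of the article.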

See the article here:
Encoding quantum information in states of light - Laser Focus World
