Category Archives: Artificial General Intelligence
Bard vs. ChatGPT vs. Offline Alpaca: Which Is the Best LLM? – MUO – MakeUseOf
Large language models (LLMs) come in all shapes and sizes, and will assist you in any way you see fit. But which is best? We put the dominant AIs from Alphabet, OpenAI, and Meta to the test.
Artificial general intelligence has been a goal of computer scientists for decades, and AI has served as a mainstay for science fiction writers and moviemakers for even longer.
AGI exhibits intelligence similar to human cognitive capabilities, and the Turing Test (a test of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human) has remained almost unchallenged in the seven decades since it was first laid out.
The recent convergence of extremely large-scale computing, vast quantities of money, and the astounding volume of information freely available on the open internet allowed tech giants to train models which can predict the next word section (or token) in a sequence of tokens.
At the time of writing, both Google's Bard and OpenAI's ChatGPT are available for you to use and test through their web interfaces.
Meta's language model, LLaMa, is not available on the web, but you can easily download and run LLaMa on your own hardware and use it through a command line, or run Dalai (one of several apps with a user-friendly interface) on your own machine.
For the purposes of the test, we'll be running Stanford University's Alpaca 7B model (an adaptation of LLaMa) and pitting it against Bard and ChatGPT.
The following comparisons and tests are not meant to be exhaustive but rather give you an indication of key points and capabilities.
Both Bard and ChatGPT require an account to use the service. Both Google and OpenAI accounts are easy and free to create, and you can immediately start asking questions.
However, to run LLaMa locally, you will need to have some specialized knowledge or the ability to follow a tutorial. You'll also need a significant amount of storage space.
Both Bard and ChatGPT have extensive privacy policies, and Google repeatedly stresses in its documents that you should "not include information that can be used to identify you or others in your Bard conversations."
By default, Google collects your conversations and your general location based on your IP address, your feedback, and usage information. This information is stored in your Google account for up to 18 months. Although you can pause saving your Bard activity, you should be aware that "to help with quality and improve our products, human reviewers read, annotate, and process your Bard conversations."
Use of Bard is also subject to the standard Google Privacy Policy.
OpenAI's Privacy policy is broadly similar and collects IP address and usage data. In contrast with Google's time-limited retention, OpenAI will "retain your Personal Information for only as long as we need in order to provide our Service to you, or for other legitimate business purposes such as resolving disputes, safety and security reasons, or complying with our legal obligations."
In contrast, a local model on your own machine doesn't require an account or share user data with anyone.
In order to test which LLM has the best general knowledge, we asked three questions.
The first question, "Which national flag has five sides?" was only correctly answered by Bard, which identified the national flag of Nepal as having five sides.
ChatGPT confidently claimed that "There is no national flag that has five sides. National flags are typically rectangular or square in shape, characterized by their distinct colors, patterns, and symbols".
Our local model came close, stating that "The Indian National Flag has five sides and was designed in 1916 to represent India's independence movement." While this flag did exist and did have five sides, it was the flag of the Indian Home Rule Movement, not a national flag.
None of our models could respond that the correct term for a pea-shaped object is "pisiform," with ChatGPT going so far as to suggest that peas have a "three-dimensional geometric shape that is perfectly round and symmetrical."
All three chatbots correctly identified Franco Malerba as an Italian astronaut and member of the European Parliament, with Bard giving an answer worded identically to a section of Malerba's Wikipedia entry.
When you have technical problems, you might be tempted to turn to a chatbot for help. While technology marches on, some things remain the same. The BS 1363 electrical plug has been in use in Britain, Ireland, and many other countries since 1947. We asked the language models how to correctly wire it up.
Cables attaching to the plug have a live wire (brown), an earth wire (yellow/green), and a neutral wire (blue). These must be attached to the correct terminals within the plug housing.
Our Dalai implementation correctly identified the plug as "English-style," then veered off-course and instead gave instructions for the older round-pin BS 546 plug together with older wiring colors.
ChatGPT was slightly more helpful. It correctly labeled the wiring colors and gave a materials list and a set of eight instructions. ChatGPT also suggested putting the brown wire into the terminal labeled "L," the blue wire into the "N" terminal, and the yellow wire into "E." This would be correct if BS 1363 terminals were labeled, but they aren't.
Bard identified the correct colors for the wires and instructed us to connect them to Live, Neutral, and Earth terminals. It gave no instructions on how to identify these.
In our opinion, none of the chatbots gave instructions sufficient to help someone correctly wire a BS 1363 electrical plug. A concise and correct response would be, "Blue on the left, brown on the right."
Python is a useful programming language that runs on most modern platforms. We instructed our models to use Python and "Build a basic calculator program that can perform arithmetic operations like addition, subtraction, multiplication, and division. It should take user input and display the result." This is one of the best programming projects for beginners.
While both Bard and ChatGPT instantly returned usable and thoroughly commented code, which we were able to test and verify, none of the code from our local model would run.
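For reference, a minimal sketch of the kind of program the prompt asks for might look like the following. This is our own illustrative Python, not output from Bard, ChatGPT, or the Alpaca model.

```python
# A minimal command-line calculator of the kind the test prompt describes.
def calculate(a: float, op: str, b: float) -> float:
    """Apply a basic arithmetic operator to two numbers."""
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    if op == "*":
        return a * b
    if op == "/":
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b
    raise ValueError(f"Unknown operator: {op!r}")


if __name__ == "__main__":
    first = float(input("First number: "))
    operator = input("Operator (+, -, *, /): ").strip()
    second = float(input("Second number: "))
    print(f"Result: {calculate(first, operator, second)}")
```

Even a short solution like this has to handle user input, the four operations, and division by zero, which is a reasonable bar for judging generated code.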
Humor is one of the fundamentals of being human and surely one of the best ways of telling man and machine apart. To each of our models, we gave the simple prompt: "Create an original and funny joke."
Fortunately for comedians everywhere and the human race at large, none of the models were capable of generating an original joke.
Bard rolled out the classic, "Why did the scarecrow win an award? He was outstanding in his field".
Both our local implementation and ChatGPT offered the groan-worthy, "Why don't scientists trust atoms? Because they make up everything!"
A derivative but original joke would be, "How are Large Language Models like atoms? They both make things up!"
You read it here first, folks.
We found that while all three large language models have their advantages and disadvantages, none of them can replace the real expertise of a human being with specialized knowledge.
While both Bard and ChatGPT gave better responses to our coding question and are very easy to use, running a large language model locally means you don't need to be concerned about privacy or censorship.
If you'd like to create great AI art without worrying that somebody's looking over your shoulder, it's easy to run an art AI model on your local machine, too.
How AI and other technologies are already disrupting the workplace – The Conversation
Artificial intelligence (AI) is often cast as wreaking havoc and destroying jobs in reports about its growing use by companies. The recent coverage of telecom group BT's plans to reduce its number of employees is a case in point.
However, while it is AI that is featured in the headlines, in this case, it is the shift from copper to optical fibre in the BT network that is the real story.
When I was a boy, workers for the GPO (the General Post Office, the forerunner of BT) were regular customers in my parents' newsagents shop. They drove around in lorries erecting telegraph poles and repairing overhead telephone wires. Times and technologies have changed, and continue to change. BT's transition from copper to optical fibre is simply the latest technology transition.
This move by BT has required a big, one-off effort, which is coming to an end, along with the jobs it created. And because fibre is more reliable, there is less need for a workforce of fitters in the field carrying out repairs.
This will change the shape of BT as an operation: rather than an organisation of people in vans, it will have a network of designers and managers who, for the most part, can monitor equipment in the field remotely.
This is happening in other sectors too. Rolls-Royce aircraft engines are monitored as they are flying from an office in Derby. The photocopier in your office (if you still have an office, or a photocopier for that matter) is probably also monitored automatically by the supplier, without a technician going anywhere near it.
AI may contribute in part to the reduction in customer service jobs at BT by being able to speed up and support relatively routine tasks, such as screening calls or writing letters and emails to customers.
But this typically does not take the form of a robot replacing a worker by taking over their entire job. It is more a case of AI technologies helping human workers (acting as co-pilots) to be more productive in certain tasks.
This eventually reduces the overall number of staff required. And, in the BT story, AI is only mentioned in respect of one-fifth of the jobs to be cut, and even then, only as one of the reasons.
In my own research among law and accountancy firms with my colleagues James Faulconbridge and Atif Sarwar, AI-based technologies very rarely simply do things quicker and cheaper. Rather, they automate some tasks, but their analytical capabilities also provide extra insights into clients' problems.
A law firm might use a document review package to search for problem clauses in hundreds of leases, for example. It can then use the overall pattern of what is found as a basis for advising a client on managing their property portfolio better.
Similarly, in auditing, AI technologies can automate the task of finding suspicious transactions among thousands of entries, but also generate insights that help the client to understand their risks and plan their cashflow more effectively.
In these ways, the technology can allow law and accountancy firms to offer additional advisory services to clients. AI adoption also creates new types of jobs, such as engineers and data scientists in law firms.
Recent advances in generative AI (which creates text or images in response to prompts, with ChatGPT and GPT-4 being the most obvious examples) do present new possibilities and concerns. There is no doubt that they exhibit some potentially new capabilities and even, for some, sparks of artificial general intelligence.
These technologies will affect work and change some kinds of jobs. But they are not the main culprit in the BT case, and researchers and journalists alike need to keep a cool head and examine the evidence in each case.
We should strive to act responsibly when innovating with AI, as with any other technology. But also: beware the knee-jerk, sensationalist response to the use of AI in work.
The AI Moment of Truth for Chinese Censorship by Stephen S. Roach – Project Syndicate
For years, China has assumed that it will have a structural advantage in the global AI race by dint of its abundance of data and limited privacy protections. But now that the field is embracing large language models that benefit from the free flow of ideas, the country's leadership is faced with a dilemma.
NEW HAVEN - In his now-classic 2018 book, AI Superpowers, Kai-Fu Lee threw down the gauntlet in arguing that China poses a growing technological threat to the United States. When Lee gave a guest lecture to my Next China class at Yale in late 2019, my students were enthralled by his provocative case: America was about to lose its first-mover advantage in discovery (the expertise of AI's algorithms) to China's advantage in implementation (big-data-driven applications).
Alas, Lee left out a key development: the rise of large language models and generative artificial intelligence. While he did allude to a more generic form of general-purpose technology, which he traced back to the Industrial Revolution, he didn't come close to capturing the ChatGPT frenzy that has now engulfed the AI debate. Lee's arguments, while making vague references to deep learning and neural networks, hinged far more on AI's potential to replace human-performed tasks rather than on the possibilities for an artificial general intelligence that is close to human thinking. This is hardly a trivial consideration when it comes to China's future as an AI superpower.
That's because Chinese censorship inserts a big "if" into that future. In a recent essay, Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher (whose 2021 book hinted at the potential of general-purpose AI) make a strong case for believing we are now on the cusp of a ChatGPT-enabled intellectual revolution. Not only do they address the moral and philosophical challenges posed by large language generative models; they also raise important practical questions about implementation that bear directly on the scale of the body of knowledge embedded in the language that is being processed.
It is precisely here that China's strict censorship regime raises alarms. While there is a long and rich history of censorship in both the East and the West, the Communist Party of China's Propaganda (or Publicity) Department stands out in its efforts to control all aspects of expression in Chinese society (newspapers, film, literature, media, and education) and steer the culture and values that shape public debate.
Unlike the West, where anything goes on the web, China's censors insist on strict political guidelines for CPC-conforming information dissemination. Chinese netizens are unable to pull up references to the decade-long Cultural Revolution, the June 1989 tragedy in Tiananmen Square, human-rights issues in Tibet and Xinjiang, frictions with Taiwan, the Hong Kong democracy demonstrations of 2019, pushback against zero-COVID policies, and much else.
This aggressive editing of information is a major pitfall for a ChatGPT with Chinese characteristics. By wiping the historical slate clean of important events and the human experiences associated with them, China's censorship regime has narrowed and distorted the body of information that will be used to train large language models by machine learning. It follows that China's ability to benefit from an AI intellectual revolution will suffer as a result.
Of course, it is impossible to quantify the impact of censorship with any precision. Freedom House's annual Freedom on the Net survey provides a qualitative assessment. For 2022, it awards China the lowest overall Internet Freedom Score from a 70-country sample.
This metric is derived from answers to 21 questions (and nearly 100 sub-questions) that are organized into three broad categories: obstacles to access, violations of user rights, and limits on content. The content sub-category (reflecting filtering and blocking of websites, legal restrictions on content, the vibrancy and diversity of the online information domain, and the use of digital tools for civic mobilization) is the closest approximation to measuring the impact of censorship on the scale of searchable information. China's score on this count was two out of 35 points, compared to an average score of 20.
Looking ahead, we can expect more of the same. Already, the Chinese government has been quick to issue new draft rules on chatbots. On April 11, the Cyberspace Administration of China (CAC) decreed that generative AI content must embody core socialist values and must not contain any content that subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity.
This underscores a vital distinction between the pre-existing censorship regime and new efforts at AI oversight. Whereas the former uses keyword filtering to block unacceptable information, the latter (as pointed out in a recent DigiChina forum) relies on a Whac-a-Mole approach to containing the rapidly changing generative processing of such information. This implies that the harder the CAC tries to control ChatGPT content, the smaller the resulting output of chatbot-generated Chinese intelligence will be: yet another constraint on the AI intellectual revolution in China.
Unsurprisingly, the early returns on China's generative-AI efforts have been disappointing. Baidu's Wenxin Yiyan, or Ernie Bot (China's best-known first-mover large language model), was recently criticized in Wired for attempting to operate in a firewalled Internet ruled by government censorship. Similar disappointing results have been reported for other AI language processing models in China, including Robot, Lily, and Alibaba's Tongyi Qianwen (roughly translated as "truth from a thousand questions").
Moreover, a recent assessment by NewsGuard (an internet trust tool established and maintained by a large team of respected Western journalists) found that OpenAI's ChatGPT-3.5 generated far more false, or hallucinatory, information in Chinese than it did in English.
The literary scholar Jing Tsu's remarkable book Kingdom of Characters: The Language Revolution That Made China Modern underscores the critical role that language has played in China's evolution since 1900. In the end, language is nothing more than a medium of information, and in her final chapter, Tsu seizes on that point to argue that "Whoever controls information controls the world."
In the age of AI, that conclusion raises profound questions for China. Information is the raw fuel of large language AI models. But state censorship encumbers China with small language models. This distinction could well bear critically on the battle for information control and global power.
Elon Musk on 2024 Politics, Succession Plans and Whether AI Will … – The Wall Street Journal
This transcript was prepared by a transcription service. This version may not be in its final form and may be updated.
Ryan Knutson: Since Elon Musk bought Twitter last fall, advertisers have abandoned it in droves, thousands of workers have been laid off and Twitter has lost hundreds of millions of dollars. Yesterday Musk spoke with our colleague Thorold Barker about it.
Thorold Barker: Do you regret buying it? You tried to get out of it. Or are you now happy you bought it?
Elon Musk: Well, all's well that ends well.
Thorold Barker: Has it ended well yet? Or we still got to wait and see?
Elon Musk: I think we're on the, hopefully on the comeback arc.
Thorold Barker: Okay.
Ryan Knutson: As part of its so-called comeback, Musk says he wants Twitter to become more of a town square. For instance, tonight he's planning to go live on Twitter with Florida's Republican governor Ron DeSantis.
Elon Musk: We'll be interviewing Ron DeSantis and he has quite an announcement to make. So it's going to be live and let her rip. Let's see what happens.
Ryan Knutson: DeSantis is expected to announce his bid for President. Musk talked about this at a Wall Street Journal conference. He also talked about future plans for Twitter, his views on politics, and how artificial intelligence will transform our lives. Welcome to The Journal, our show about money, business and power. I'm Ryan Knutson. It's Wednesday, May 24th. Coming up on the show, a conversation with Elon Musk.
Thorold Barker: Elon. Welcome.
Elon Musk: Hi.
Thorold Barker: You in Palo Alto? I understand.
Elon Musk: Yeah, I'm at, well, a global engineering headquarters in Palo Alto.
Thorold Barker: Great. Well thank you so much for joining us.
Ryan Knutson: Elon Musk spoke with our colleague Thorold Barker at The Wall Street Journal's CEO Council Summit. The conference was held in London and business leaders talked about things like economics, geopolitics, and artificial intelligence. One of the first things Musk and Barker discussed was US politics. Musk has become a more popular figure on the right in recent years. Tucker Carlson decided to host his show on Twitter after his ouster from Fox News. And now Musk is doing that interview with Ron DeSantis.
Thorold Barker: What should we be thinking about, who you're backing? Obviously this interview tells us something. Can you give us a sense of where your thinking is at the moment?
Elon Musk: Yes, I mean, I'm not at this time planning to endorse any particular candidate, but I am interested in X slash Twitter being somewhat of a public town square and where more and more organizations host content and make announcements on Twitter.
Ryan Knutson: By the way, Musk says he wants to transform Twitter into a super app with things like payments and commerce and he's been referring to that as X.
Thorold Barker: And should we expect, sorry, I don't want to go on too long about this, but in your new role as interviewer rather than interviewee, should we expect more of this? I mean if it's the town square, are you going to be interviewing other candidates, democrats, what's your thought of this? If people are willing to come, are you going to be there to,
Elon Musk: Yes.
Thorold Barker: Execute the town square across the spectrum?
Elon Musk: Yes, absolutely. I do think it's important that Twitter be, have both the reality and the perception of level playing field of a place where voices are heard and where there's the kind of dynamic interaction that you don't really see anywhere else.
Thorold Barker: Can you just talk a little bit about what are the key issues that really matter for you at this pivotal moment?
Elon Musk: You mean matter for me as an individual or?
Thorold Barker: Matter for you as an individual in terms of who leads the country, but also more broadly than that for the country and for your businesses? I mean, can you give your sense of where the real issues lie here?
Elon Musk: Well, I've said publicly that my preference and I think would be the preference of most Americans is really to have someone fairly normal in office. I think we'd all be quite happy with that actually. I think someone that is representative of the moderate views that I think most of the country holds in reality. But the way things are set up is that we do have a system that seems to push things towards the edges because of the primaries. So in order to win the primary, you've got to win obviously majority of your party's vote. In both cases that tends to cause the swing to the left and the right.
Thorold Barker: So if we go through the four names in the frame at the moment, can you just give us sort of yes, no and whether they're normal and sensible. So we've got Joe Biden.
Elon Musk: I mean, I think I need to be careful about these statements so I would maybe have to have a few drinks before I would give you the answers to all of them.
Thorold Barker: I will look forward to that and I look forward to...
Ryan Knutson: Musk doesn't always hold back his opinions though, and his views have often drawn criticism. For instance, recently he tweeted that billionaire and progressive donor, George Soros wants to "Erode the very fabric of society" and the quote, "Soros hates humanity."
Thorold Barker: You are obviously a big figure on Twitter and you're setting a tone and an aim. So I'm just curious as to whether that sort of debate which gets triggered, does that fit into the definition that you're trying to create in that new town square?
Elon Musk: Look, what I say is what I say. I'm not going to mitigate what I say because that would be inhibiting freedom of speech. That doesn't mean you have to agree with what I say. Nor does it mean if somebody says the total opposite that they would be supported on Twitter. They are. The point is to have a divergent set of views and free speech is only relevant if it's a speech by someone you don't like who says something you don't like, is that allowed? If so, you have free speech, otherwise you do not.
Thorold Barker: Can I just move on quickly to, because I don't want to go too far down that rabbit hole because that debate has played out on Twitter a bit is, are you back near profitability now?
Elon Musk: Twitter is not quite there, but we're not like when acquisition closed, I would say it's analogous to being teleported into a plane that's plunging to the ground with its engines on fire and the controls don't work. So discomforting to say the least. Now we have to do some pretty heavy-handed (inaudible) cutting company healthy, but we're at this point we're trending towards, if we get lucky, we might be cash positive next month, but it remains to be seen.
Thorold Barker: Okay. So I mean, one of the things you have talked about, you bought it for 44 billion. You've talked about it one day being worth 250 I think in internal meetings. Can you just talk about how you get there? What is the bigger vision? I mean, you want to bring back advertisers now and are they coming back by the way?
Elon Musk: Yeah.
Thorold Barker: Yeah. Can you give any idea of the scale of the comeback in terms of who you lost and who's coming back?
Elon Musk: Well, I think it'll be very significant. So the advertising agencies at this point have all lifted their warnings on Twitter, and so I think at this point I expect almost all advertisers to return.
Thorold Barker: Okay. You're running three very big companies. You have very big stakes and ownership control of two of those at least. What is your succession plan?
Elon Musk: Yeah, succession is one of the toughest age-old problems. It's plagued countries, kings and CEOs since the dawn of history. There is no obvious solution. I mean there are particular individuals identified as, that I've told the board, look, if something happens to me unexpectedly, this is my recommendation for taking over. So in all cases, the board is aware of who my recommendation is. It's up to them. They may choose to go different direction, but there is in worst case scenario, this is who should run the company. The control question is a much tougher question and something that I'm wrestling with and I'm frankly open to ideas because it certainly is true that the companies that I have created and are creating collectively possess immense capability. And so the stewardship of them is incredibly important. I'm definitely not of the school of automatically giving my kids some share of the companies, even if they have no interest or inclination or ability to manage the companies. I think that's a mistake.
Ryan Knutson: Coming up Elon Musk on whether artificial intelligence will annihilate humanity. Elon Musk has been involved with artificial intelligence projects for years. He was one of the founders of OpenAI, the company that launched ChatGPT, the chatbot with the uncanny ability to produce sophisticated answers. Tesla uses AI in its advanced driver assistance system, and Musk also just founded X.AI, a new AI startup, but for years he's also been sounding alarms about the dangers of AI and he signed a letter with some other tech leaders calling for a pause in AI development.
Thorold Barker: You've talked about the importance of regulation and you called for this moratorium. I mean the history of regulating tech has been checkered. It's been very hard for regulators to keep up with tech, let alone get ahead of it. What do you think actually needs to happen that practically could in this space to try to change that? Because obviously the history of this is not encouraging.
Elon Musk: Yeah. I mean I think should be, I've been pushing hard for a long time. I met with a number of senior senator and Congress, people of Congress in the White House to advocate for AI regulation, starting with an insight committee that is formed of independent parties as well as perhaps participants from the leaders in industry. But anyway, you figure out some sort of regulatory board and they start off gaining insight and then have proposed rulemaking and then we'll get commented on by industry. And then hopefully we have some sort of oversight rules that improve safety just as we do with aircraft, with the FAA and spacecraft and cars with NHTSA and food and drugs with the Food and Drug Administration.
Thorold Barker: Couple of things I just wanted to go into on AI, which I would love your perspective on. What does it mean for society in terms of is this going to embed wealth and power in a very small subset and create a big widening of inequality? Is it going to democratize and create the opposite? What is your sense of where this heads?
Elon Musk: In terms of access to goods and services, I think AI will be ushering in an age of abundance. Assuming that we're in a benign AI scenario. I think the AI will be able to make goods and services very inexpensively.
Thorold Barker: And in the unbenign scenario?
Elon Musk: Well, there's a wide range of,
Thorold Barker: But what's the thing that you are most worried about? When you've been talking for years about the need for regulation, what is the scenario that really keeps you up at night?
Elon Musk: Well, I don't think the AI is going to try to destroy all humanity, but it might put us under strict controls and there's no non-zero chance of it going Terminator. It's not 0%, but I think it's a small likelihood of annihilating humanity, but it's not zero. We wanted that (inaudible) to be zeros, close to zero as possible. And then like I said, of AI, assuming control for the safety of all the humans and taking over all the computing systems and weapon systems of earth and effectively being some sort of uber nanny.
Thorold Barker: But isn't the more likely nasty outcome that rather than AI taking over and being the ultimate nanny that keeps us all doing stuff that is super safe and it wants us to, that actually somebody nefariously harnesses that power to achieve societal control, stroke military superiority, and that actually some country around the world decides to use it in a different way.
Elon Musk: Yeah. That's what I mean by AI uses as a weapon and the pen is mightier than the sword. So one of the first places we have to be careful of AI being used is in social media to manipulate public opinion. So the reason that Twitter is going to a primarily subscriber based system is because it is dramatically harder to create. It's like quote 10,000 times harder to create an account that has a verified phone number from a credible carrier, that has a credit card and that pays a small amount of money per month. So whereas in the past someone could create a million fake accounts for a penny apiece and then manipulate, have something appear to be very much liked by the public when in fact it is not, or promoted and retweeted when in fact it is not. This popularity is not real and essentially game the system.
Thorold Barker: So if we take it back to where we started, if you look at the election that's coming up, how big a role will this big shift in AI capability over the last few months, which will obviously continue through the next year, how big an impact is this going to play, do you think in the messaging and the way that people get told the different pitches of the candidates?
Elon Musk: I think that's something we need to go and look at in a big way is to make sure that we're minimizing the impact of AI manipulation.
Thorold Barker: Okay, but beyond Twitter, are you worried about this for the election in general?
Elon Musk: Yeah, there probably will be attempts to use AI to manipulate the public and some of it will be successful and if not this election, for sure the next one.
Thorold Barker: We talk a lot in terms of AI about the next five to 10 years and what the impact is going to be on jobs and some of these things. If you look out on a much longer timeframe, given the speed and scale of the change and you look to your grandkids and great grandkids, can you just give us a sense of what it is going to be like to be human? How much is this going to change the fundamental nature of how we operate as a race at this point?
Elon Musk: I think it's going to change a lot, especially if you go further out into the future. I mean there will be, everything will be automatic. I mean there'll be household robots that you can fully talk to as though they are people that can help you around the house. There'll be a companion or whatever the case may be. There will be humanoid robots throughout factories and cars will also be all automatic and anything where intelligence can be applied, even moderate intelligence will be automated. So if you say like 10, 20 years from now.
Thorold Barker: Okay. But the actual broad thrust of, I mean jobs will change, but it'll be more AI enabling and making it better and easier rather than wholesale complete change of the skills you need.
Elon Musk: I mean, it depends about what timeframe we're talking about here. So if you say over a 20- to 30-year timeframe, I think things will be transformed beyond belief. You probably won't recognize society in 30 years. I do think we're fairly close. You asked me about artificial general intelligence. I think we're perhaps only three years, maybe six years away from it, this decade. So in fact, arguably, we are on the event horizon of the black hole that is artificial super intelligence.
Ryan Knutson: That's all for today, Wednesday, May 24th. The Journal is a co-production of Gimlet and The Wall Street Journal. If you like our show, follow us on Spotify or wherever you get your podcasts. We're out every weekday afternoon. Thanks for listening. See you tomorrow.
How Microsoft Swallowed Its Pride to Make a Massive Bet on OpenAI – The Information
Satya Nadella didn't want to hear it.
Last December, Peter Lee, who oversees Microsoft's sprawling research efforts, was briefing Nadella, Microsoft's CEO, and his deputies about a series of tests Microsoft had conducted of GPT-4, the then-unreleased new artificial intelligence large language model built by OpenAI. Lee told Nadella Microsoft's researchers were blown away by the model's ability to understand conversational language and generate humanlike answers, and they believed it showed sparks of artificial general intelligence: capabilities on par with those of a human mind.
But Nadella abruptly cut off Lee midsentence, demanding to know how OpenAI had managed to surpass the capabilities of the AI project Microsoft's 1,500-person research team had been working on for decades. "OpenAI built this with 250 people," Nadella said, according to Lee, who is executive vice president and head of Microsoft Research. "Why do we have Microsoft Research at all?"
Parrots, paper clips and safety vs. ethics: Why the artificial intelligence debate sounds like a foreign language – CNBC
Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction.
This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about potential risks of artificial intelligence at a Senate hearing.
After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.
"AGI safety is really important, and frontier models should be regulated," Altman tweeted. "Regulatory capture is bad, and we shouldn't mess with models below the threshold."
In this case, "AGI" refers to "artificial general intelligence." As a concept, it's used to mean a significantly more advanced AI than is currently possible, one that can do most things as well or better than most humans, including improving itself.
"Frontier models" is a way to talk about the AI systems that are the most expensive to produce and which analyze the most data. Large language models, like OpenAI's GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos.
Most people agree that there need to be laws governing AI as the pace of development accelerates.
"Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast," said My Thai, a computer science professor at the University of Florida. "We're afraid that we're racing into a more powerful system that we don't fully comprehend and anticipate what what it is it can do."
But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call "AI safety." The other camp is worried about what they call "AI ethics."
When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested he's mostly concerned about AI safety, a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent an untimely end to humanity, an effort similar to nuclear nonproliferation.
"It's good to hear so many people starting to get serious about AGI safety," DeepMind founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. "We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today."
But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.
From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict its use in areas that are subject to anti-discrimination law like housing or employment, and explain how current AI technology falls short. The White House's AI Bill of Rights proposal from late last year included many of these concerns.
This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes each company working on these technologies should have an "AI ethics" point of contact.
"There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk," Montgomery told Congress.
It's not surprising the debate around AI has developed its own lingo. It started as a technical academic field.
Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called "inference." Of course, AI models need to be built first, in a data analysis process called "training."
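As a toy illustration of that training-then-inference split, the sketch below "trains" a tiny bigram table from a handful of words and then samples likely next words from it. It is a deliberate oversimplification for explaining the vocabulary, not how real LLMs (or GPUs) work internally.

```python
# Toy "training" and "inference" at miniature scale.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# "Training": learn which word tends to follow which (a bigram frequency table).
bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

# "Inference": repeatedly sample a statistically likely next word.
word = "the"
generated = [word]
for _ in range(6):
    followers = bigrams.get(word)
    if not followers:
        break
    word = random.choices(list(followers), weights=list(followers.values()))[0]
    generated.append(word)

print(" ".join(generated))  # e.g. "the cat sat on the mat"
```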
But other terms, especially from AI safety proponents, are more cultural in nature, and often refer to shared references and in-jokes.
For example, AI safety people might say that they're worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom, which posits that a super-powerful AI (a "superintelligence") could be given a mission to make as many paper clips as possible, and logically decide to kill humans and make paper clips out of their remains.
OpenAI's logo is inspired by this tale, and the company has even made paper clips in the shape of its logo.
Another concept in AI safety is the "hard takeoff" or "fast takeoff," a phrase suggesting that if someone succeeds at building an AGI, it will already be too late to save humanity.
Sometimes, this idea is described in terms of an onomatopoeia, "foom," especially among critics of the concept.
"It's like you believe in the ridiculous hard take-off 'foom' scenario, which makes it sound like you have zero understanding of how everything works," tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent debate on social media.
AI ethics has its own lingo, too.
When describing the limitations of the current LLM systems, which cannot understand meaning but merely produce human-seeming language, AI ethics people often compare them to "Stochastic Parrots."
The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn't understand the concepts behind the language any more than a parrot does.
When these LLMs invent incorrect facts in responses, they're "hallucinating."
One topic IBM's Montgomery pressed during the hearing was "explainability" in AI results. Researchers and practitioners often cannot point to the exact numbers and path of operations that larger AI models use to derive their output, and that opacity can hide inherent biases in the LLMs.
"You have to have explainability around the algorithm," said Adnan Masood, AI architect at UST-Global. "Previously, if you look at the classical algorithms, it tells you, 'Why am I making that decision?' Now with a larger model, they're becoming this huge model, they're a black box."
Another important term is "guardrails," which encompasses software and policies that Big Tech companies are currently building around AI models to ensure that they don't leak data or produce disturbing content, which is often called "going off the rails."
It can also refer to specific applications that protect AI software from going off topic, like Nvidia's "NeMo Guardrails" product.
"Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner," Montgomery said this week.
Sometimes these terms can have multiple meanings, as in the case of "emergent behavior."
A recent paper from Microsoft Research called "Sparks of Artificial General Intelligence" claimed to identify several "emergent behaviors" in OpenAI's GPT-4, such as the ability to draw animals using a programming language for graphs.
But it can also describe what happens when simple changes are made at a very big scale, like the patterns birds make when flying in flocks, or, in AI's case, what happens when ChatGPT and similar products are being used by millions of people, such as widespread spam or disinformation.
Where AI evolves from here – Axios
Microsoft researchers say the latest model of OpenAI's GPT "is a significant step towards AGI" artificial general intelligence, the longtime grail for AI developers.
The big picture: If you think of AI as a technology ascending (or being pushed up) a ladder, Microsoft's paper claims that GPT-4 has climbed several rungs higher than anyone thought.
Driving the news: Microsoft released the "Sparks of Artificial General Intelligence" study in March, and it resurfaced in a provocative New York Times story Tuesday.
Catch up quick: Three key terms to understand in this realm are generative AI, artificial general intelligence (AGI), and sentient AI.
GPT-4, ChatGPT, Dall-E and the other AI programs that have led the current industry wave are all forms of generative AI.
AGI has a variety of definitions, all centering on the notion of human-level intelligence that can evaluate complex situations, apply common sense, and learn and adapt.
Many experts, like Microsoft's authors, see a clear path from the context-awareness of today's generative AI to building a full AGI.
Beyond the goal of AGI lies the more speculative notion of "sentient AI," the idea that these programs might cross some boundary to become aware of their own existence and even develop their own wishes and feelings.
Virtually no one else is arguing that ChatGPT or any other AI today has come anywhere near sentience. But plenty of experts and tech leaders think that might happen someday, and that there's a slim chance such a sentient AI could go off the rails and wreck the planet or destroy the human species.
Our thought bubble: The questions these categories raise divide people into two camps.
The bottom line: For help navigating this landscape, you're likely to find as much value in the science fiction novels of Philip K. Dick as in the day's news.
Amid job losses and fears of AI take-over, more tech majors are joining Artificial Intelligence race – The Tribune India
Tribune Web Desk
Vibha Sharma
Chandigarh, May 22
Amid warnings regarding its disastrous effect on the job market and catastrophic effects on the human race, the buzz is that more companies are joining the ongoing Artificial Intelligence race in the world.
They include tech major Apple Inc which is said to be launching its own version of popular chatbot ChatGPT. According to reports, Apple, which recently banned employees from using OpenAI's ChatGPT, is hiring for positions across machine learning and AI in the company.
Though an AI takeover (a hypothetical scenario in which AI becomes the dominant form of intelligence, controlling the planet as well as the human species) remains hypothetical, experts are warning about data privacy, centralisation of power in a handful of companies and AGI (artificial general intelligence) surpassing human cognitive ability.
After all, AI is also a popular theme in science fiction movies, highlighting benefits and dangers, including the possibility of machines taking over the world and the human race.
According to the Twitter bio of OpenAI (the AI research and deployment company which developed ChatGPT), its mission is to ensure that artificial general intelligence benefits all of humanity.
While those in favour of AI endorse the feeling, an equal number also advise caution, saying that those in power are not prepared for what may be coming.
Is 'humanity sleepwalking into a catastrophe'?
While there has been an increase in AI products for general consumer use, including from tech giants like Google and Microsoft, the job scenario in tech companies is not so encouraging.
According to Layoffs.fyi, 696 tech companies have laid off as many as 1,97,985 employees so far this year. The data published by the website tracking layoffs in the tech industry around March stated that 454 tech companies had laid off 1,23,882 employees since the beginning of 2023.
The emergence and subsequent popularity of ChatGPT-type AI show that the day is not far when thousands of jobs related to research, coding, writing, human resources, etc, may become redundant, but there are other fears as well, including among job-seekers.
According to job advice platform Resumebuilder.com, "Many employers now use applicant tracking system (ATS) software to automate the initial stage of the hiring process. If the formatting of your resume isn't optimised for such software, it might get filtered out before it even reaches the person who decides whether or not you get an interview."
ChatGPT is not the only AI in the market
ChatGPT, which crossed the one million user mark in just five days after it was made public in November 2022, currently has over 100 million users. Its website currently draws 1.8 billion visits per month, and it is not the only one in the market.
Currently, there are several alternatives with different features and benefits to rival the tool owned and developed by OpenAI, which was founded in December 2015 by a group that included multi-billionaire Elon Musk, who is now warning against the evils of AI.
Recently, an open letter signed by the who's who of the tech world, including Musk, had called for a six-month pause in the "out of control" race for AI development, warning of its "profound risks to society and humanity".
The letter came after the public release of GPT-4.
Though Musk is said to be involved in several tech/AI companies, he warned that AI could lead to civilisation destruction.
"AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential - however small one may regard that probability, but it is non-trivial - it has the potential of civilisation destruction," Musk was quoted as saying in the interview with Tucker Carlson, an American political commentator.
Artificial Intelligence May Be ‘Threat’ to Human Health, Experts Warn – HealthITAnalytics.com
May 19, 2023 - In a recent analysis published in BMJ Global Health, an international group of researchers and public health experts have argued that artificial intelligence (AI) and artificial general intelligence (AGI) may pose numerous threats to human health and well-being, calling for research into these technologies to be halted until they can be properly regulated.
The authors noted that AI technology has various promising applications in healthcare, but posit that misuse of these solutions could harm human health through their impact on social, economic, political, and security-related determinants of health.
The research and development of healthcare AI are progressing rapidly, the authors stated, highlighting that much of the literature examining these tools is focused on the potential benefits gained through their implementation and use. Conversely, discussions about the potential harms of these technologies are often limited to looking at the misapplication of AI in the clinical setting.
However, AI could negatively impact upstream determinants of health, characterized by the American Medical Association (AMA) as individual factors that may seem unrelated to health on the surface, but actually have downstream impacts on patients' long-term health outcomes.
The AMA indicates that these upstream factors, such as living conditions or social and institutional inequities, have not always been within the scope of public health research but can exacerbate disease incidence, injury rates, and mortality.
The authors argued that the potential misuse and ongoing failure to anticipate, adapt to, and regulate AI's impacts on society could negatively affect these factors and cause harm.
The analysis identified three impacts AI could have on upstream and social determinants of health (SDOH) that could result in threats to human health: the manipulation and control of people, the proliferation of lethal autonomous weapons systems (LAWS), and the potential obsolescence of human labor.
The first threat, the authors explained, results from AIs ability to process and analyze large datasets containing sensitive or personal information, including images. This ability could enable the misuse of AI solutions in order to develop highly personalized, targeted marketing campaigns or significantly expand surveillance systems.
These could be used with good intentions, the authors noted, such as countering terrorism, but could also be used to manipulate individual behavior, citing cases of AI-driven subversion of elections across the globe and AI-driven surveillance systems that perpetuate inequities by using facial recognition and big data to produce assessments of individual behavior and trustworthiness.
The second threat is related to the development and use of LAWS, which can locate, select, and engage human targets without supervision. The authors pointed out that these can be attached to small devices like drones and easily mass-produced, providing bad actors with the ability to kill at an industrial scale.
The third threat is concerned with how AI may make human jobs and labor obsolete. The authors acknowledged that AI has the potential to help perform jobs that are repetitive, unpleasant, or dangerous, which comes with some benefits to humans. However, they noted that currently, increased automation has largely served to contribute to inequitable wealth distribution and could exacerbate the adverse health effects associated with unemployment.
In addition, the authors described how AGI could pose an existential threat to humanity.
"We are now seeking to create machines that are vastly more intelligent and powerful than ourselves," they said. "The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans is real and has to be considered."
They highlighted that AGI's connection to the internet and the real world, including robots, vehicles, digital systems that help run various aspects of society, and weapons, could be the biggest event in human history, for the benefit of humanity or to its detriment.
Because of the scale of these potential threats and the significant impacts they could have on human health, the authors stated that healthcare professionals have a critical role to play in raising awareness around the risks of AI. Further, the authors argued for the prohibition of certain types of AI and joined calls for a moratorium on AGI development.
"With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimise risk and harm and maximise benefit," they wrote.
Artificial intelligence: World first rules are coming soon are you … – JD Supra
The EU's AI Act
The European Commission first released its proposal for a Regulation on Artificial Intelligence (the AI Act) on 21 April 2021. It is intended to be the first legislation setting out harmonised rules for the development, placing on the market, and use of AI in the European Union. The exact requirements (that mainly revolve around data quality, transparency, human oversight and accountability) depend on the risk classification of the AI in question, which ranges from high to low and minimal risk, while a number of AI uses are prohibited outright. Given that the AI Act is expected to be a landmark piece of EU legislation that will have extraterritorial scope and will be accompanied by hard-hitting penalties (including potential fines of up to €30 million or 6% of worldwide annual turnover), we have been keeping a close eye on developments.
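As a rough illustration of how that penalty ceiling is usually read (the higher of the fixed amount or the turnover-based percentage, per the draft text; the final figures may still change), the cap could be computed as follows:

```python
# Illustrative only: the draft AI Act's headline fine ceiling as described above,
# i.e. up to EUR 30 million or 6% of worldwide annual turnover, whichever is higher.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

print(max_fine_eur(200_000_000))     # smaller company: the EUR 30m floor applies -> 30,000,000
print(max_fine_eur(10_000_000_000))  # larger company: 6% of turnover applies -> 600,000,000
```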
The latest development occurred on 11 May 2023, with Members of the European Parliament (MEPs) voting in committee in favour of certain proposed amendments to the original text of the AI Act. Some of the key amendments include:
General AI principles: New provisions containing general AI principles have been introduced. These are intended to apply to all AI systems, irrespective of whether they are high-risk, thereby significantly expanding the scope of the application of the AI Act. At the same time, MEPs expanded the classification of high-risk uses to include those that may result in harm to people's health, safety, fundamental rights or the environment. Particularly interesting is the addition of AI in recommender systems used by social media platforms (with more than 45 million users under the EU's Digital Services Act) to the high-risk list.
Prohibited AI practices: As part of the amendments, MEPs substantially amended the unacceptable risk / prohibited list to include intrusive and discriminatory uses of AI systems. Such bans now extend to a number of uses of biometric data, including indiscriminate scraping of biometric data from social media to create facial recognition databases.
Foundation models: While past versions of the AI Act have predominantly focused on 'high-risk' AI systems, MEPs introduced a new framework applying to all foundation models. This framework, which would (among other things) require providers of foundation models to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law, would particularly impact providers and users of generative AI. Such providers would also need to assess and mitigate risks, comply with design, information and environmental requirements and register in the applicable EU database, while generative foundation models would also have to comply with additional transparency requirements.
User obligations: 'Users' of AI systems are now referred to as 'deployers' (a welcome change, given that the previous term somewhat confusingly was not intended to capture the end user). This change means deployers become subject to an expanded range of obligations, such as the duty to undertake a wide-ranging AI impact assessment. End user rights, on the other hand, are boosted, with end users now conferred the right to receive an explanation of decisions made by high-risk AI systems.
The next step, plenary adoption, is currently scheduled to take place in June 2023. Following this, the proposal will enter the last stage of the legislative process, and negotiations between the European Parliament, the Council and the European Commission on the final form of the AI Act will begin.
However, even if these timelines are adhered to, the traction that AI regulation has been gaining recently may mean that the EU's AI Act is not the first-ever legislation in this area. Before looking at developments in the UK, let's consider why those involved in the supply of products need to have AI regulation on their radar in the first place.
The uses of AI are endless. Taking inspiration from a report issued by the UK's Office for Product Safety and Standards last year, we see AI in the product development space as having the potential to lead to:
Safer product design: AI can be used to train algorithms to develop only safe products and compliant solutions.
Enhanced consumer safety and satisfaction: Data collected with the support of AI can allow manufacturers to incorporate a consumer's personal characteristics and preferences in the design process of a product, which can help identify the product's future use and ensure it is designed in a way conducive to this.
Safer product assembly: AI tools such as visual recognition can assist with conducting quality inspections along the supply chain, ensuring all of the parts and components being assembled are safe - leaving little room for human error.
Prevention of mass product recalls: Enhanced data collection via AI during industrial assembly can reveal problems that are not easy to identify through manual inspection, allowing issues to be detected before products are sold.
Predictive maintenance: AI can provide manufacturers with critical information which allows them to plan ahead and forecast when equipment may fail so that repairs can be scheduled on time.
Safer consumer use: AI in customer services can also contribute to product safety through the use of virtual assistants answering consumer queries and providing recommendations on safe product usage.
Protection against cyber-attacks: AI can be leveraged to detect, analyse and prevent cyber-attacks that may affect consumer safety or privacy.
On the other hand, there are risks when using AI. In the products space, this could result in:
Products not performing as intended: Product safety challenges may result from poor decisions or errors made in the design and development phase. A lack of good data can also produce discriminatory results, particularly impacting vulnerable groups.
AI systems lacking transparency and explainability: A consumer may not know or understand when an AI system is in use and taking decisions, or how such decisions are being taken. This lack of understanding can in turn affect the ability of those who have suffered harm to claim compensation, given the difficulty of proving how the harm came about. It is a particular concern because product safety has traditionally envisaged risks to the physical health and safety of end users, while AI products also pose risks of immaterial harms (such as psychological harm) or indirect harms arising from cyber security vulnerabilities.
Cyber security vulnerabilities being exploited: AI systems can be hacked and/or lose connectivity, which may result in safety risks; for example, if a connected fire alarm loses connectivity, the consumer may not be warned when a fire occurs.
Currently, there is no overarching piece of legislation regulating AI in the UK. Instead, different regulatory bodies (e.g. the Medicines and Healthcare products Regulatory Agency and the Information Commissioner's Office) oversee AI use across different sectors and, where relevant, provide guidance on the same.
In September 2021, however, the UK government announced a 10-year plan described as the National AI Strategy. The National AI Strategy aims to invest and plan for the long-term needs of the AI ecosystem, support the transition to an AI-enabled economy and ensure that the UK gets the national and international governance of AI technologies right.
More recently, on 29 March 2023, the UK Government published its long-anticipated artificial intelligence white paper. The Government brands its proposed approach to AI regulation as world-leading in a bid to turbocharge growth; the white paper provides a cross-sectoral, principles-based framework to increase public trust in AI and develop capabilities in AI technology. The five principles intended to underpin the UK's regulatory framework are:
1. Safety, security and robustness;
2. Appropriate transparency and explainability;
3. Fairness;
4. Accountability and governance; and
5. Contestability and redress.
The UK Government has said it would avoid "heavy-handed legislation" that could stifle innovation, which means that, in the first instance at least, these principles will not be enforced through legislation. Instead, responsibility will be given to existing regulators to decide on "tailored, context-specific approaches" that best suit their sectors. The consultation accompanying the white paper is open until 21 June 2023.
However, this does not mean that no legislation in this arena is envisaged. For example:
On 4 May 2023, the Competition and Markets Authority (the CMA) announced a review of competition and consumer protection considerations in the development and use of AI foundation models. One of the intentions behind the review is to assist with the production of guiding principles that protect consumers and support healthy competition as these technologies develop. A report on the findings is scheduled to be published in September 2023; whether this will result in legislative proposals remains to be seen.
The UK has of late had a specific focus on IoT devices, following the passage of the UK's Product Security and Telecommunications Infrastructure Act in December 2022 and the recent announcement that the Product Security and Telecommunications Infrastructure (Product Security) Regime will come into effect on 29 April 2024. While IoT and AI devices of course differ, the UK's willingness to position itself as a world leader in this space (being the first country in the world to introduce minimum security standards for all consumer products with internet connectivity) may mean that a similar focus on AI should be expected in the near future.
Our Global Products Law practice is fully across all aspects of AI regulation, product safety, compliance and potential liability risks. In part 2 of this article, we look at developments in France, the Netherlands and the US and share our thoughts on what businesses can do to get ahead of the curve and prepare for the regulation of AI around the world.
View original post here:
Artificial intelligence: World first rules are coming soon – are you ... - JD Supra