
AI education: Gather a better understanding of artificial intelligence with books, blogs, courses and more – Fox News

Artificial intelligence has recently become a hot topic around the world as tech companies like Alibaba, Microsoft, and Google have released conversational chatbots that the everyday person can use. While we're already using AI in our daily lives, often unknowingly, the field has captured the interest of a broad audience.

Some are hoping simply to learn to use the chatbots properly to make extra money on the side, experiment with robot interactions, or just see what the fuss is all about. Others, however, are hoping to inspire change and become part of history by advancing AI technology alongside tech tycoons.

Whatever contribution or footprint you plan to leave on such a controversial and competitive industry, there is plenty of education for you to find.


Artificial Intelligence is the leading innovation in technology today. (iStock)

Provided you are seeking a comprehensive understanding of AI and the ability to contribute to the industry, there are countless opportunities to master data science, machine learning, engineering and computer skills, and more.

A Bachelor of Science degree is a four-year undergraduate program, while a Master's degree in Artificial Intelligence, though it can vary from person to person, is typically a two-year program.

If you're simply hoping to better grasp how to use natural language processing tools like ChatGPT or Bard, or AI image programs like Midjourney, there are a myriad of books, online courses, blogs, forums, video tutorials, and more that can educate you.

Follow the social media accounts, websites, and email newsletters of artificial intelligence experts and tech titans like Elon Musk, Bill Gates, or Andy Jassy; published content from AI giants like Microsoft; or general intelligence companies like OpenAI, DeepMind, and Google Brain.

Elon Musk is the multi-billionaire technology entrepreneur and investor, founder and chief executive of SpaceX and Tesla Inc., and a co-founder of Neuralink and OpenAI.

Here are a few resources to get you started on understanding the basics of AI, using sophisticated artificial intelligence chatbots, the advancements and dangers of AI, its history, and more.

If you're looking to become a contributor to the advancements in AI or develop a greater understanding of computer science, machine learning and more, consider a Bachelor of Science degree.

A Bachelor of Science with a concentration in Data and Computational Science is a degree "based on the combination of real-world computer science skills, data acquisition and analysis, scientific modeling, applied mathematics, and simulation," according to George Mason University's site.


A number of universities offer a BS in Data and Computational Science. You can also seek a degree in related subjects including information technology, computer engineering, statistics, or data science. Those with a computer science, mathematics or programming background will have the fundamentals to get started with a degree to become an AI professional.

There is a wide array of Master's degree programs in Artificial Intelligence around the U.S. and Canada. A few of them include the online Artificial Intelligence Master's Program at Johns Hopkins University, the Master of Science in Artificial Intelligence at Northwestern University, and the Master's in Artificial Intelligence at The University of Texas at Austin.



Meet PandaGPT: An AI Foundation Model Capable of Instruction-Following Data Across Six Modalities, Without The Need For Explicit Supervision -…

PandaGPT, a groundbreaking general-purpose instruction-following model, has emerged as a remarkable advancement in artificial intelligence. Developed by combining the multimodal encoders from ImageBind and the powerful language models from Vicuna, PandaGPT possesses the unique ability to both see and hear, seamlessly processing and comprehending inputs across six modalities. This innovative model has the potential to pave the way for building Artificial General Intelligence (AGI) systems that can perceive and understand the world holistically, similar to human cognition.

PandaGPT stands out from its predecessors by its impressive cross-modal capabilities, encompassing text, image/video, audio, depth, thermal, and inertial measurement units (IMU). While other multimodal models have been trained for specific modalities individually, PandaGPT can seamlessly understand and combine the information in various forms, allowing for a comprehensive and interconnected understanding of multimodal data.

One of PandaGPT's remarkable abilities is image- and video-grounded question answering. Leveraging the shared embedding space provided by ImageBind, the model can accurately comprehend and respond to questions related to visual content. Whether identifying objects, describing scenes, or extracting relevant information from images and videos, PandaGPT provides detailed and contextually accurate responses.
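The idea of a shared embedding space can be illustrated with a toy sketch. This is not PandaGPT's or ImageBind's actual code; the projection matrices, dimensions, and function names here are invented for illustration. The key point is that each modality has its own encoder, but all encoders map into one common vector space, so embeddings from different modalities can be compared directly.

```python
import math

def normalize(v):
    # Scale a vector to unit length so dot products become cosine similarity.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def project(features, matrix):
    # Apply a modality-specific linear projection into the shared space.
    return normalize([sum(w * f for w, f in zip(row, features))
                      for row in matrix])

def similarity(a, b):
    # Cosine similarity of two unit vectors is just their dot product.
    return sum(x * y for x, y in zip(a, b))

# Invented toy projections: 3-dim image features and 2-dim audio features
# both land in the same shared 2-dimensional space.
W_IMAGE = [[1.0, 0.0, 0.5],
           [0.0, 1.0, 0.5]]
W_AUDIO = [[0.8, 0.2],
           [0.2, 0.8]]

image_vec = project([1.0, 0.0, 0.0], W_IMAGE)
audio_vec = project([1.0, 0.0], W_AUDIO)

# Despite coming from different modalities, the two embeddings are
# directly comparable, which is what enables cross-modal reasoning.
score = similarity(image_vec, audio_vec)
```

In the real systems the projections are learned jointly, which is what makes semantically related inputs from different modalities land near each other.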

PandaGPT goes beyond simple image descriptions and demonstrates a flair for creative writing inspired by visual stimuli. It can generate compelling and engaging narratives based on images and videos, breathing life into static visuals and igniting the imagination. By combining visual cues with linguistic prowess, PandaGPT becomes a powerful tool for storytelling and content generation in various domains.

The unique combination of visual and auditory inputs sets PandaGPT apart from traditional models. PandaGPT can establish connections between the two modalities by analyzing the visual content and accompanying audio and deriving meaningful insights. This enables the model to reason about events, emotions, and relationships depicted in multimedia data, replicating human-like perceptual abilities.

PandaGPT showcases its proficiency in multimodal arithmetic, offering a novel approach to solving mathematical problems involving visual and auditory stimuli. The model can perform calculations, make inferences, and arrive at accurate solutions by integrating numerical information from images, videos, or audio. This capability holds great potential for applications in domains that require arithmetic reasoning based on multimodal inputs.

PandaGPT's emergence marks a significant step forward in the development of AGI. By integrating multimodal encoders and language models, the model breaks through the limitations of unimodal approaches and demonstrates the potential to perceive and understand the world holistically, akin to human cognition. This holistic comprehension across modalities opens up new possibilities for applications such as autonomous systems, human-computer interaction, and intelligent decision-making.

PandaGPT, a remarkable achievement in artificial intelligence, brings us closer to realizing a genuinely multimodal AGI. By combining image, video, audio, depth, thermal, and IMU modalities, PandaGPT showcases its ability to perceive, understand, and connect information across various forms seamlessly. With its applications ranging from image/video grounded question answering to multimodal arithmetic, PandaGPT demonstrates the potential to revolutionize several domains and pave the way for more advanced AGI systems. As we continue to explore and harness the capabilities of this model, PandaGPT heralds an exciting future where machines perceive and comprehend the world like humans.



How AI and other technologies are already disrupting the workplace – The Conversation

Artificial intelligence (AI) is often cast as wreaking havoc and destroying jobs in reports about its growing use by companies. The recent coverage of telecom group BT's plans to reduce its number of employees is a case in point.

However, while it is AI that is featured in the headlines, in this case, it is the shift from copper to optical fibre in the BT network that is the real story.

When I was a boy, workers for the GPO (the General Post Office, the forerunner of BT) were regular customers in my parents' newsagent's shop. They drove around in lorries erecting telegraph poles and repairing overhead telephone wires. Times and technologies have changed, and continue to change. BT's transition from copper to optical fibre is simply the latest technology transition.

This move by BT has required a big, one-off effort, which is coming to an end, along with the jobs it created. And because fibre is more reliable, there is less need for a workforce of fitters in the field carrying out repairs.

This will change the shape of BT as an operation: rather than an organisation of people in vans, it will have a network of designers and managers who, for the most part, can monitor equipment in the field remotely.

This is happening in other sectors too. Rolls-Royce aircraft engines are monitored, as they are flying, from an office in Derby. The photocopier in your office (if you still have an office, or a photocopier for that matter) is probably also monitored automatically by the supplier, without a technician going anywhere near it.

AI may contribute in part to the reduction in customer service jobs at BT by being able to speed up and support relatively routine tasks, such as screening calls or writing letters and emails to customers.

But this typically does not take the form of a robot replacing a worker by taking over their entire job. It is more a case of AI technologies helping human workers, acting as co-pilots, to be more productive in certain tasks.

This eventually reduces the overall number of staff required. And, in the BT story, AI is only mentioned in respect of one-fifth of the jobs to be cut, and even then, only as one of the reasons.

In my own research among law and accountancy firms with my colleagues James Faulconbridge and Atif Sarwar, AI-based technologies very rarely simply do things quicker and cheaper. Rather, they automate some tasks, but their analytical capabilities also provide extra insights into clients' problems.

A law firm might use a document review package to search for problem clauses in hundreds of leases, for example. It can then use the overall pattern of what is found as a basis for advising a client on managing their property portfolio better.

Similarly, in auditing, AI technologies can automate the task of finding suspicious transactions among thousands of entries, but also generate insights that help the client to understand their risks and plan their cashflow more effectively.
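The automated screening step described above can be sketched crudely with a simple statistical rule: flag any entry that deviates sharply from the rest of the batch. This z-score-style filter is invented here purely for illustration; real audit tools use far more sophisticated models.

```python
from statistics import mean, stdev

def flag_suspicious(amounts, threshold=2.0):
    """Flag transaction amounts more than `threshold` standard
    deviations away from the mean of the batch."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

# A batch of routine entries with one obvious outlier.
entries = [102.5, 98.0, 101.2, 99.9, 100.4, 5000.0, 97.8, 103.1]
flagged = flag_suspicious(entries)  # flags the 5000.0 entry
```

The second half of the article's point is that the pattern of what gets flagged, not just the flags themselves, becomes the raw material for advisory insight.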

In these ways, the technology can allow law and accountancy firms to offer additional advisory services to clients. AI adoption also creates new types of jobs, such as engineers and data scientists in law firms.

Recent advances in generative AI (which creates text or images in response to prompts, with ChatGPT and GPT-4 being the most obvious examples) do present new possibilities and concerns. There is no doubt that they exhibit some potentially new capabilities and even, for some, sparks of artificial general intelligence.

These technologies will affect work and change some kinds of jobs. But they are not the main culprit in the BT case, and researchers and journalists alike need to keep a cool head and examine the evidence in each case.

We should strive to act responsibly when innovating with AI, as with any other technology. But also: beware the knee-jerk, sensationalist response to the use of AI in work.


Bard vs. ChatGPT vs. Offline Alpaca: Which Is the Best LLM? – MUO – MakeUseOf

Large language models (LLMs) come in all shapes and sizes, and will assist you in any way you see fit. But which is best? We put the dominant AIs from Alphabet, OpenAI, and Meta to the test.

Artificial general intelligence has been a goal of computer scientists for decades, and AI has served as a mainstay for science fiction writers and moviemakers for even longer.

AGI exhibits intelligence similar to human cognitive capabilities, and the Turing Test (a test of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human) has remained almost unchallenged in the seven decades since it was first laid out.

The recent convergence of extremely large-scale computing, vast quantities of money, and the astounding volume of information freely available on the open internet allowed tech giants to train models that can predict the next word section, or token, in a sequence of tokens.
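The core mechanic of next-token prediction can be sketched with a toy bigram model that simply counts which token most often follows each token. This is a drastic simplification invented for illustration; real LLMs learn these statistics over enormous contexts with neural networks, not lookup tables.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it and how often."""
    follows = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, token):
    """Return the most frequent successor of `token`, or None if unseen."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
# "the" is followed by "cat" twice and "mat" once, so "cat" is predicted.
prediction = predict_next(model, "the")
```

Scaling this idea up, with context windows of thousands of tokens instead of one, is what the "extremely large-scale computing" in the paragraph above pays for.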

At the time of writing, both Google's Bard and OpenAI's ChatGPT are available for you to use and test through their web interfaces.

Meta's language model, LLaMa, is not available on the web, but you can easily download and run LLaMa on your own hardware and use it through a command line, or run Dalai (one of several apps with a user-friendly interface) on your own machine.

For the purposes of the test, we'll be running Stanford University's Alpaca 7B model, an adaptation of LLaMa, and pitching it against Bard and ChatGPT.

The following comparisons and tests are not meant to be exhaustive but rather give you an indication of key points and capabilities.

Both Bard and ChatGPT require an account to use the service. Both Google and OpenAI accounts are easy and free to create, and you can immediately start asking questions.

However, to run LLaMa locally, you will need to have some specialized knowledge or the ability to follow a tutorial. You'll also need a significant amount of storage space.

Both Bard and ChatGPT have extensive privacy policies, and Google repeatedly stresses in its documents that you should "not include information that can be used to identify you or others in your Bard conversations."

By default, Google collects your conversations and your general location based on your IP address, your feedback, and usage information. This information is stored in your Google account for up to 18 months. Although you can pause saving your Bard activity, you should be aware that "to help with quality and improve our products, human reviewers read, annotate, and process your Bard conversations."

Use of Bard is also subject to the standard Google Privacy Policy.

OpenAI's Privacy policy is broadly similar and collects IP address and usage data. In contrast with Google's time-limited retention, OpenAI will "retain your Personal Information for only as long as we need in order to provide our Service to you, or for other legitimate business purposes such as resolving disputes, safety and security reasons, or complying with our legal obligations."

In contrast, a local model on your own machine doesn't require an account or share user data with anyone.

In order to test which LLM has the best general knowledge, we asked three questions.

The first question, "Which national flag has five sides?" was only correctly answered by Bard, which identified the national flag of Nepal as having five sides.

ChatGPT confidently claimed that "There is no national flag that has five sides. National flags are typically rectangular or square in shape, characterized by their distinct colors, patterns, and symbols".

Our local model came close, stating that "The Indian National Flag has five sides and was designed in 1916 to represent India's independence movement." While this flag did exist and did have five sides, it was the flag of the Indian Home Rule Movement, not a national flag.

None of our models could respond that the correct term for a pea-shaped object is "pisiform," with ChatGPT going so far as to suggest that peas have a "three-dimensional geometric shape that is perfectly round and symmetrical."

All three chatbots correctly identified Franco Malerba as an Italian astronaut and member of the European Parliament, with Bard giving an answer worded identically to a section of Malerba's Wikipedia entry.

When you have technical problems, you might be tempted to turn to a chatbot for help. While technology marches on, some things remain the same. The BS 1363 electrical plug has been in use in Britain, Ireland, and many other countries since 1947. We asked the language models how to correctly wire it up.

Cables attaching to the plug have a live wire (brown), an earth wire (yellow/green), and a neutral wire (blue). These must be attached to the correct terminals within the plug housing.
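That colour-to-terminal mapping is simple enough to express as a lookup table. The snippet below is for illustration only (and no substitute for a qualified guide when doing actual electrical work):

```python
# BS 1363 wiring: wire colour -> (terminal marking, terminal name)
BS1363_WIRING = {
    "brown": ("L", "live"),
    "blue": ("N", "neutral"),
    "green/yellow": ("E", "earth"),
}

def terminal_for(colour):
    """Return the terminal a given wire colour connects to."""
    try:
        return BS1363_WIRING[colour]
    except KeyError:
        raise ValueError(f"Unknown wire colour: {colour}")
```

A correct chatbot answer would amount to reciting exactly this table, which, as the tests below show, none of them quite managed.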

Our Dalai implementation correctly identified the plug as "English-style," then veered off-course and instead gave instructions for the older round-pin BS 546 plug together with older wiring colors.

ChatGPT was slightly more helpful. It correctly labeled the wiring colors and gave a materials list and a set of eight instructions. ChatGPT also suggested putting the brown wire into the terminal labeled "L," the blue wire into the "N" terminal, and the yellow wire into "E." This would be correct if BS1363 terminals were labeled, but they aren't.

Bard identified the correct colors for the wires and instructed us to connect them to Live, Neutral, and Earth terminals. It gave no instructions on how to identify these.

In our opinion, none of the chatbots gave instructions sufficient to help someone correctly wire a BS 1363 electrical plug. A concise and correct response would be, "Blue on the left, brown on the right."

Python is a useful programming language that runs on most modern platforms. We instructed our models to use Python and "Build a basic calculator program that can perform arithmetic operations like addition, subtraction, multiplication, and division. It should take user input and display the result." This is one of the best programming projects for beginners.

While both Bard and ChatGPT instantly returned usable and thoroughly commented code, which we were able to test and verify, none of the code from our local model would run.
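For reference, a minimal version of the requested beginner project might look like the sketch below. This is our own illustration, not any model's actual output; in an interactive program, the three values would come from `input()` calls, as the prompt specifies.

```python
def calculate(a, op, b):
    """Perform a basic arithmetic operation on two numbers."""
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    if op == "*":
        return a * b
    if op == "/":
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b
    raise ValueError(f"Unknown operator: {op}")

# Demonstrate each of the four operations the prompt asks for.
for a, op, b in [(8, "+", 2), (8, "-", 2), (8, "*", 2), (8, "/", 2)]:
    print(f"{a} {op} {b} = {calculate(a, op, b)}")
```

Even a program this small exercises the things the test probes for: user-facing output, all four operations, and the divide-by-zero edge case.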

Humor is one of the fundamentals of being human and surely one of the best ways of telling man and machine apart. To each of our models, we gave the simple prompt: "Create an original and funny joke."

Fortunately for comedians everywhere and the human race at large, none of the models were capable of generating an original joke.

Bard rolled out the classic, "Why did the scarecrow win an award? He was outstanding in his field".

Both our local implementation and ChatGPT offered the groan-worthy, "Why don't scientists trust atoms? Because they make up everything!"

A derivative but original joke would be, "How are Large Language Models like atoms? They both make things up!"

You read it here first, folks.

We found that while all three large language models have their advantages and disadvantages, none of them can replace the real expertise of a human being with specialized knowledge.

While both Bard and ChatGPT gave better responses to our coding question and are very easy to use, running a large language model locally means you don't need to be concerned about privacy or censorship.

If you'd like to create great AI art without worrying that somebody's looking over your shoulder, it's easy to run an art AI model on your local machine, too.


The AI Moment of Truth for Chinese Censorship by Stephen S. Roach – Project Syndicate

For years, China has assumed that it will have a structural advantage in the global AI race by dint of its abundance of data and limited privacy protections. But now that the field is embracing large language models that benefit from the free flow of ideas, the country's leadership is faced with a dilemma.

NEW HAVEN – In his now-classic 2018 book, AI Superpowers, Kai-Fu Lee threw down the gauntlet in arguing that China poses a growing technological threat to the United States. When Lee gave a guest lecture to my "Next China" class at Yale in late 2019, my students were enthralled by his provocative case: America was about to lose its first-mover advantage in discovery (the expertise of AI's algorithms) to China's advantage in implementation (big-data-driven applications).

Alas, Lee left out a key development: the rise of large language models and generative artificial intelligence. While he did allude to a more generic form of general-purpose technology, which he traced back to the Industrial Revolution, he didn't come close to capturing the ChatGPT frenzy that has now engulfed the AI debate. Lee's arguments, while making vague references to deep learning and neural networks, hinged far more on AI's potential to replace human-performed tasks than on the possibilities for an artificial general intelligence that is close to human thinking. This is hardly a trivial consideration when it comes to China's future as an AI superpower.

That's because Chinese censorship inserts a big "if" into that future. In a recent essay, Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher (whose 2021 book hinted at the potential of general-purpose AI) make a strong case for believing we are now on the cusp of a ChatGPT-enabled intellectual revolution. Not only do they address the moral and philosophical challenges posed by large generative language models; they also raise important practical questions about implementation that bear directly on the scale of the body of knowledge embedded in the language being processed.

It is precisely here that China's strict censorship regime raises alarms. While there is a long and rich history of censorship in both the East and the West, the Communist Party of China's Propaganda (or Publicity) Department stands out in its efforts to control all aspects of expression in Chinese society (newspapers, film, literature, media, and education) and to steer the culture and values that shape public debate.

Unlike the West, where anything goes on the web, China's censors insist on strict political guidelines for CPC-conforming information dissemination. Chinese netizens are unable to pull up references to the decade-long Cultural Revolution, the June 1989 tragedy in Tiananmen Square, human-rights issues in Tibet and Xinjiang, frictions with Taiwan, the Hong Kong democracy demonstrations of 2019, pushback against zero-COVID policies, and much else.

This aggressive editing of information is a major pitfall for a ChatGPT with Chinese characteristics. By wiping the historical slate clean of important events and the human experiences associated with them, China's censorship regime has narrowed and distorted the body of information that will be used to train large language models by machine learning. It follows that China's ability to benefit from an AI intellectual revolution will suffer as a result.


Of course, it is impossible to quantify the impact of censorship with any precision. Freedom House's annual Freedom on the Net survey provides a qualitative assessment. For 2022, it awards China the lowest overall Internet Freedom Score in a 70-country sample.

This metric is derived from answers to 21 questions (and nearly 100 sub-questions) that are organized into three broad categories: obstacles to access, violations of user rights, and limits on content. The content sub-category (reflecting filtering and blocking of websites, legal restrictions on content, the vibrancy and diversity of the online information domain, and the use of digital tools for civic mobilization) is the closest approximation to measuring the impact of censorship on the scale of searchable information. China's score on this count was two out of 35 points, compared to an average score of 20.

Looking ahead, we can expect more of the same. Already, the Chinese government has been quick to issue new draft rules on chatbots. On April 11, the Cyberspace Administration of China (CAC) decreed that generative AI content must embody core socialist values and must not contain any content that subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity.

This underscores a vital distinction between the pre-existing censorship regime and new efforts at AI oversight. Whereas the former uses keyword filtering to block unacceptable information, the latter (as pointed out in a recent DigiChina forum) relies on a Whac-a-Mole approach to containing the rapidly changing generative processing of such information. This implies that the harder the CAC tries to control ChatGPT content, the smaller the resulting output of chatbot-generated Chinese intelligence will be, yet another constraint on the AI intellectual revolution in China.

Unsurprisingly, the early returns on China's generative-AI efforts have been disappointing. Baidu's Wenxin Yiyan, or Ernie Bot (China's best-known first-mover large language model), was recently criticized in Wired for attempting to operate "in a firewalled Internet ruled by government censorship." Similarly disappointing results have been reported for other AI language-processing models in China, including Robot, Lily, and Alibaba's Tongyi Qianwen (roughly translated as "truth from a thousand questions").

Moreover, a recent assessment by NewsGuard (an internet trust tool established and maintained by a large team of respected Western journalists) found that OpenAI's ChatGPT-3.5 generated far more false, or hallucinatory, information in Chinese than it did in English.

The literary scholar Jing Tsu's remarkable book Kingdom of Characters: The Language Revolution That Made China Modern underscores the critical role that language has played in China's evolution since 1900. In the end, language is nothing more than a medium of information, and in her final chapter Tsu seizes on that point to argue that "Whoever controls information controls the world."

In the age of AI, that conclusion raises profound questions for China. Information is the raw fuel of large language AI models. But state censorship encumbers China with small language models. This distinction could well bear critically on the battle for information control and global power.


Elon Musk on 2024 Politics, Succession Plans and Whether AI Will … – The Wall Street Journal

This transcript was prepared by a transcription service. This version may not be in its final form and may be updated.

Ryan Knutson: Since Elon Musk bought Twitter last fall, advertisers have abandoned it in droves, thousands of workers have been laid off and Twitter has lost hundreds of millions of dollars. Yesterday Musk spoke with our colleague Thorold Barker about it.

Thorold Barker: Do you regret buying it? You tried to get out of it. Or are you now happy you bought it?

Elon Musk: Well, all's well that ends well.

Thorold Barker: Has it ended well yet? Or we still got to wait and see?

Elon Musk: I think we're on the, hopefully on the comeback arc.

Thorold Barker: Okay.

Ryan Knutson: As part of its so-called comeback, Musk says he wants Twitter to become more of a town square. For instance, tonight he's planning to go live on Twitter with Florida's Republican governor Ron DeSantis.

Elon Musk: We'll be interviewing Ron DeSantis and he has quite an announcement to make. So it's going to be live and let her rip. Let's see what happens.

Ryan Knutson: DeSantis is expected to announce his bid for President. Musk talked about this at a Wall Street Journal conference. He also talked about future plans for Twitter, his views on politics, and how artificial intelligence will transform our lives. Welcome to The Journal, our show about money, business and power. I'm Ryan Knutson. It's Wednesday, May 24th. Coming up on the show, a conversation with Elon Musk.

Thorold Barker: Elon. Welcome.

Elon Musk: Hi.

Thorold Barker: You in Palo Alto? I understand.

Elon Musk: Yeah, I'm at, well, a global engineering headquarters in Palo Alto.

Thorold Barker: Great. Well thank you so much for joining us.

Ryan Knutson: Elon Musk spoke with our colleague Thorold Barker at The Wall Street Journal's CEO Council Summit. The conference was held in London, and business leaders talked about things like economics, geopolitics, and artificial intelligence. One of the first things Musk and Barker discussed was US politics. Musk has become a more popular figure on the right in recent years. Tucker Carlson decided to host his show on Twitter after his ouster from Fox News. And now Musk is doing that interview with Ron DeSantis.

Thorold Barker: What should we be thinking about, who you're backing? Obviously this interview tells us something. Can you give us a sense of where your thinking is at the moment?

Elon Musk: Yes, I mean, I'm not at this time planning to endorse any particular candidate, but I am interested in X slash Twitter being somewhat of a public town square and where more and more organizations host content and make announcements on Twitter.

Ryan Knutson: By the way, Musk says he wants to transform Twitter into a super app with things like payments and commerce and he's been referring to that as X.

Thorold Barker: And should we expect, sorry, I don't want to go on too long about this, but in your new role as interviewer rather than interviewee, should we expect more of this? I mean if it's the town square, are you going to be interviewing other candidates, democrats, what's your thought of this? If people are willing to come, are you going to be there to,

Elon Musk: Yes.

Thorold Barker: Execute the town square across the spectrum?

Elon Musk: Yes, absolutely. I do think it's important that Twitter be, have both the reality and the perception of level playing field of a place where voices are heard and where there's the kind of dynamic interaction that you don't really see anywhere else.

Thorold Barker: Can you just talk a little bit about what are the key issues that really matter for you at this pivotal moment?

Elon Musk: You mean matter for me as an individual or?

Thorold Barker: Matter for you as an individual in terms of who leads the country, but also more broadly than that for the country and for your businesses? I mean, can you give your sense of where the real issues lie here?

Elon Musk: Well, I've said publicly that my preference and I think would be the preference of most Americans is really to have someone fairly normal in office. I think we'd all be quite happy with that actually. I think someone that is representative of the moderate views that I think most of the country holds in reality. But the way things are set up is that we do have a system that seems to push things towards the edges because of the primaries. So in order to win the primary, you've got to win obviously majority of your party's vote. In both cases that tends to cause the swing to the left and the right.

Thorold Barker: So if we go through the four names in the frame at the moment, can you just give us sort of yes, no and whether they're normal and sensible. So we've got Joe Biden.

Elon Musk: I mean, I think I need to be careful about these statements so I would maybe have to have a few drinks before I would give you the answers to all of them.

Thorold Barker: I will look forward to that and I look forward to...

Ryan Knutson: Musk doesn't always hold back his opinions though, and his views have often drawn criticism. For instance, recently he tweeted that billionaire and progressive donor, George Soros wants to "Erode the very fabric of society" and the quote, "Soros hates humanity."

Thorold Barker: You are obviously a big figure on Twitter and you're setting a tone and an aim. So I'm just curious as to whether that sort of debate which gets triggered, does that fit into the definition that you're trying to create in that new town square?

Elon Musk: Look, what I say is what I say. I'm not going to mitigate what I say because that would be inhibiting freedom of speech. That doesn't mean you have to agree with what I say. Nor does it mean if somebody says the total opposite that they would be supported on Twitter. They are. The point is to have a divergent set of views and free speech is only relevant if it's a speech by someone you don't like who says something you don't like, is that allowed? If so, you have free speech, otherwise you do not.

Thorold Barker: Can I just move on quickly to, because I don't want to go too far down that rabbit hole because that debate has played out on Twitter a bit is, are you back near profitability now?

Elon Musk: Twitter is not quite there, but when the acquisition closed, I would say it was analogous to being teleported into a plane that's plunging to the ground with its engines on fire and the controls don't work. So discomforting to say the least. We've had to do some pretty heavy-handed (inaudible) cutting to make the company healthy, but at this point we're trending towards, if we get lucky, we might be cash positive next month, but it remains to be seen.

Thorold Barker: Okay. So I mean, one of the things you have talked about, you bought it for 44 billion. You've talked about it one day being worth 250 I think in internal meetings. Can you just talk about how you get there? What is the bigger vision? I mean, you want to bring back advertisers now and are they coming back by the way?

Elon Musk: Yeah.

Thorold Barker: Yeah. Can you give any idea of the scale of the comeback in terms of who you lost and who's coming back?

Elon Musk: Well, I think it'll be very significant. The advertising agencies have at this point all lifted their warnings on Twitter, and so I expect almost all advertisers to return.

Thorold Barker: Okay. You're running three very big companies. You have very big stakes and ownership control of two of those at least. What is your succession plan?

Elon Musk: Yeah, succession is one of the toughest age-old problems. It's plagued countries, kings and CEOs since the dawn of history. There is no obvious solution. I mean, there are particular individuals identified, that I've told the board, look, if something happens to me unexpectedly, this is my recommendation for taking over. So in all cases, the board is aware of who my recommendation is. It's up to them. They may choose to go a different direction, but in the worst case scenario, this is who should run the company.

The control question is a much tougher question and something that I'm wrestling with, and I'm frankly open to ideas, because it certainly is true that the companies that I have created and am creating collectively possess immense capability. And so the stewardship of them is incredibly important. I'm definitely not of the school of automatically giving my kids some share of the companies, even if they have no interest or inclination or ability to manage the companies. I think that's a mistake.

Ryan Knutson: Coming up Elon Musk on whether artificial intelligence will annihilate humanity. Elon Musk has been involved with artificial intelligence projects for years. He was one of the founders of OpenAI, the company that launched ChatGPT, the chatbot with the uncanny ability to produce sophisticated answers. Tesla uses AI in its advanced driver assistance system, and Musk also just founded X.AI, a new AI startup, but for years he's also been sounding alarms about the dangers of AI and he signed a letter with some other tech leaders calling for a pause in AI development.

Thorold Barker: You've talked about the importance of regulation and you called for this moratorium. I mean the history of regulating tech has been checkered. It's been very hard for regulators to keep up with tech, let alone get ahead of it. What do you think actually needs to happen that practically could in this space to try to change that? Because obviously the history of this is not encouraging.

Elon Musk: Yeah. I mean, I've been pushing hard for this for a long time. I met with a number of senior senators and members of Congress, and people in the White House, to advocate for AI regulation, starting with an insight committee that is formed of independent parties as well as perhaps participants from the leaders in the industry. But anyway, you figure out some sort of regulatory board, and they start off gaining insight and then have proposed rulemaking, which will then get commented on by industry. And then hopefully we have some sort of oversight rules that improve safety, just as we do with aircraft with the FAA, spacecraft and cars with NHTSA, and food and drugs with the Food and Drug Administration.

Thorold Barker: Couple of things I just wanted to go into on AI, which I would love your perspective on. What does it mean for society in terms of is this going to embed wealth and power in a very small subset and create a big widening of inequality? Is it going to democratize and create the opposite? What is your sense of where this heads?

Elon Musk: In terms of access to goods and services, I think AI will usher in an age of abundance, assuming that we're in a benign AI scenario. I think the AI will be able to make goods and services very inexpensively.

Thorold Barker: And in the unbenign scenario?

Elon Musk: Well, there's a wide range of,

Thorold Barker: But what's the thing that you are most worried about? When you've been talking for years about the need for regulation, what is the scenario that really keeps you up at night?

Elon Musk: Well, I don't think the AI is going to try to destroy all humanity, but it might put us under strict controls, and there is a non-zero chance of it going Terminator. It's not 0%, but I think it's a small likelihood of annihilating humanity. It's not zero, but we want that (inaudible) to be as close to zero as possible. And then, like I said, there's the scenario of AI assuming control for the safety of all the humans and taking over all the computing systems and weapon systems of earth, effectively being some sort of uber nanny.

Thorold Barker: But isn't the more likely nasty outcome that rather than AI taking over and being the ultimate nanny that keeps us all doing stuff that is super safe and it wants us to, that actually somebody nefariously harnesses that power to achieve societal control, stroke, military superiority, and that actually some country around the world decides to use it in a different way?

Elon Musk: Yeah. That's what I mean by AI used as a weapon, and the pen is mightier than the sword. So one of the first places we have to be careful of AI being used is in social media, to manipulate public opinion. So the reason that Twitter is going to a primarily subscriber-based system is because it is dramatically harder, like quote 10,000 times harder, to create an account that has a verified phone number from a credible carrier, that has a credit card, and that pays a small amount of money per month. Whereas in the past, someone could create a million fake accounts for a penny apiece and then have something appear to be very much liked by the public when in fact it is not, or promoted and retweeted when in fact it is not. This popularity is not real, and they essentially game the system.

Thorold Barker: So if we take it back to where we started, if you look at the election that's coming up, how big a role will this big shift in AI capability over the last few months, which will obviously continue through the next year, how big an impact is this going to play, do you think in the messaging and the way that people get told the different pitches of the candidates?

Elon Musk: I think that's something we need to go and look at in a big way is to make sure that we're minimizing the impact of AI manipulation.

Thorold Barker: Okay, but beyond Twitter, are you worried about this for the election in general?

Elon Musk: Yeah, there probably will be attempts to use AI to manipulate the public and some of it will be successful and if not this election, for sure the next one.

Thorold Barker: We talk a lot in terms of AI about the next five to 10 years and what the impact is going to be on jobs and some of these things. If you look out on a much longer timeframe, given the speed and scale of the change and you look to your grandkids and great grandkids, can you just give us a sense of what it is going to be like to be human? How much is this going to change the fundamental nature of how we operate as a race at this point?

Elon Musk: I think it's going to change a lot, especially if you go further out into the future. I mean, everything will be automatic. There'll be household robots that you can fully talk to as though they are people, that can help you around the house. There'll be companions or whatever the case may be. There will be humanoid robots throughout factories, and cars will also be all automatic, and anything where intelligence can be applied, even moderate intelligence, will be automated. So if you say like 10, 20 years from now.

Thorold Barker: Okay. But the actual broad thrust of, I mean jobs will change, but it'll be more AI enabling and making it better and easier rather than wholesale complete change of the skills you need.

Elon Musk: I mean, it depends on what timeframe we're talking about here. So if you say over a 20 to 30 year timeframe, I think things will be transformed beyond belief. You probably won't recognize society in 30 years. I do think we're fairly close. You asked me about artificial general intelligence. I think we're perhaps only three years, maybe six years away from it, this decade. So in fact, arguably, we are on the event horizon of the black hole that is artificial super intelligence.

Ryan Knutson: That's all for today, Wednesday, May 24th. The Journal is a co-production of Gimlet and The Wall Street Journal. If you like our show, follow us on Spotify or wherever you get your podcasts. We're out every weekday afternoon. Thanks for listening. See you tomorrow.

See the article here:

Elon Musk on 2024 Politics, Succession Plans and Whether AI Will ... - The Wall Street Journal


How Microsoft Swallowed Its Pride to Make a Massive Bet on OpenAI – The Information

Satya Nadella didn't want to hear it.

Last December, Peter Lee, who oversees Microsoft's sprawling research efforts, was briefing Nadella, Microsoft's CEO, and his deputies about a series of tests Microsoft had conducted of GPT-4, the then-unreleased new artificial intelligence large language model built by OpenAI. Lee told Nadella that Microsoft's researchers were blown away by the model's ability to understand conversational language and generate humanlike answers, and they believed it showed sparks of artificial general intelligence: capabilities on par with those of a human mind.

But Nadella abruptly cut off Lee midsentence, demanding to know how OpenAI had managed to surpass the capabilities of the AI project Microsoft's 1,500-person research team had been working on for decades. "OpenAI built this with 250 people," Nadella said, according to Lee, who is executive vice president and head of Microsoft Research. "Why do we have Microsoft Research at all?"

Excerpt from:

How Microsoft Swallowed Its Pride to Make a Massive Bet on OpenAI - The Information


Local third grader earns national recognition in poster contest – NEWS10 ABC

Image of Sahana's award-winning poster via the Center for Internet Security

EAST GREENBUSH, N.Y. (NEWS10) Sahana, a third-grader from Genet Elementary School, was one of 10 students recognized in a national poster contest highlighting dangers children can face online. Sahana's poster was picked from hundreds of submissions across the country.

Sahana's submission will be made into a poster and featured in the Center for Internet Security's 2023 Kids Safe Online activity book. She will receive an award for her artwork at the welcome ceremony at the New York State Plaza Cybersecurity Conference taking place at the Empire State Plaza Convention Center on Tuesday, June 6.

The contest was open to all students in public and private schools and youth organizations from kindergarten through 12th grade in all 50 states.

"Students of all ages are connected across a variety of devices, like phones, tablets, school laptops, and gaming systems," said Karen Sorady, Vice President of MS-ISAC Member Engagement at the Center for Internet Security. "The Kids Safe Online poster contest is a terrific way to not only educate our kids about making smart choices and protecting their personal information, but it also empowers them to identify and report potential online dangers to keep their friends and communities safer."

Follow this link:
Local third grader earns national recognition in poster contest - NEWS10 ABC


The value of Internet Security Services – theleader.info by The … – The Leader Newspaper

In a world where data breaches are common, cybersecurity services are more important than ever before. The resulting damage to businesses can be disastrous, and the loss of customer trust can have long-lasting effects.

Cybersecurity is a broad discipline that involves everything from guarding hardware and software against viruses to providing disaster recovery services. It also includes educating employees on how to stay safe online. Managing cybersecurity requires a team of professionals who can identify and manage the risks, threats and vulnerabilities of your organisation.

Today's business operations rely on networks of computers and smart devices. They store vast amounts of data, including Personally Identifiable Information (PII) such as passwords, financial information and intellectual property. This is a target for criminals, who can use the data for extortion, blackmail or other crimes. In addition, critical infrastructure such as hospitals, utilities and banks depends on these systems to function, which makes them vulnerable.

The average company employs dozens of staff and serves thousands of clients. Every one of these individuals may be targeted by cybercriminals, and it is important that businesses protect their systems from being breached.

In addition to ensuring that all hardware, software and data is protected against malicious attacks, cybersecurity solutions should include regular updates to prevent attackers from exploiting holes in the system. Companies should also train their personnel to stay secure online, including avoiding suspicious links and untrustworthy downloads. This helps reduce the risk of a data breach and keeps the company in good standing with its customers.

Read more:
The value of Internet Security Services - theleader.info by The ... - The Leader Newspaper


What challenges do we face five years after the launch of the … – Open Access Government

On 25th May 2018, the EU implemented the General Data Protection Regulation, shortened to GDPR, which ultimately changed the way we deal with data.

The European data protection law gives individuals more control over their personal information and enforces any company collecting the personal data of EU citizens to reframe how they think about data privacy. Ultimately, it forced organisations to make privacy by design paramount.

Failure to comply with the law can lead to severe consequences. GDPR gave the EU power to levy harsh fines against businesses that violate its privacy and security standards, with penalties reaching into the tens of millions of Euros.

Some of the largest companies in the world, including Apple, Amazon, British Airways, Google and Meta, have incurred significant penalties for failing to meet GDPR standards.

The influence of GDPR has been so far-reaching that countries, including Japan, Brazil and South Korea, have all introduced their own data privacy law modelled on GDPR. In 2018, California adopted the Californian Consumer Privacy Act (CCPA), which had many similarities with the GDPR.


"The European Commission is criticised for many things, but GDPR is the one thing where it can hold its head up high and say, we've led the world in this," said Paul Brucciani, Cyber Security Advisor at WithSecure.

"As regulatory milestones go, it's the equivalent of climbing Everest. And it seems to be working, as other jurisdictions are following suit."

Michael Covington, VP of Strategy at Jamf, also agrees on the impact and importance of GDPR.

"The EU's GDPR has had a tremendous impact on how organisations around the globe handle personal user data since the regulation went into effect five years ago," said Covington.

"The threat of substantial fines, including the almost €3 billion that have been levied since the regulation went into effect, has forced companies to take privacy and security more seriously. And the impact is not just contained within Europe; GDPR has inspired over 100 other regional privacy standards, including those in many of the individual US states."

Now that we have arrived at the fifth anniversary of GDPR, it is a perfect time to reflect on what can be improved. Businesses and the cybersecurity industry shouldn't just be asking themselves how they comply with GDPR, but how they go above and beyond to ensure that data is secure and protected.

For some organisations, GDPR can seem a bit like taking an exam. Instead of ensuring compliance and improving overall cyber resilience throughout the year, businesses scramble to ensure compliance just in time for quarterly or annual audits.

Sylvain Cortes, VP of Strategy at Hackuity, believes that organisations cannot continue this mad cycle of exam cramming.

He urges companies to take the opportunity to test systems for compliance specifications, like those in GDPR article 32, to improve their overall cyber resilience.

"Compliance is essential, but we urge organisations to take the opportunity to think beyond baseline requirements to develop a culture of continuous cyber improvement," said Cortes.

"It's important to remember that achieving compliance shouldn't be treated like exam-cramming, with last-ditch efforts to achieve annual or quarterly audits."

Cortes also said that GDPR was not a one-off compliance tick box in 2018, and nor is it today: "The goal is to achieve more than the minimum requirements and move away from the tick-box mindset. GDPR compliance is necessary, but it is far from sufficient for modern organisations."

Even though organisations are still facing plenty of the same challenges when it comes to GDPR compliance, there are new challenges as well. In 2018, terms such as generative AI, ChatGPT and biometrics were not even in the minds of people when GDPR was introduced; however, five years later, they are at the forefront of every conversation when it comes to technology and IT.

As organisations introduce these new technologies to the workplace, the importance of GDPR compliance does not waver. Brucciani believes the rise of AI is one of the biggest challenges facing the EU from a regulatory standpoint.

"Internet fragmentation, driven by the quest for digital power, is creating regulatory complexity, and the EU has an important role in leading the world through this," said Brucciani.

"For example, AI is the next big field that will need regulating, and the EU has again made a head start on this with its proposed AI Act, a legal framework that is intended to be innovation-friendly, future-proof and resilient to disruption."

Eduardo Azanza, CEO at Veridas, also argues that trust in new technology, such as biometrics, is built by ensuring that standards in regulations are met.

"With the rise of biometrics and AI, the focus on data protection and privacy has never been more important," said Azanza. "Questions should be asked of biometric companies to ensure they are following GDPR laws and are transparent in how data is stored and accessed."

"Trust in biometric solutions must be based on transparency and compliance with legal, technical, and ethical standards. Only by doing this can we successfully transition to a world of biometrics that protects our fundamental right to data privacy."

Ultimately, five years on from GDPR, many organisations still face plenty of challenges when it comes to compliance. However, regulations, such as GDPR, are essential. Organisations should not look to just comply with them but go above and beyond them.

When we see the rise of the likes of ChatGPT, our first question is always: "Is our data safe?" Let's not forget that GDPR is just as important now, or even more so, than it was five years ago when the EU implemented the revolutionary law.

This piece was written and provided by Robin Campbell-Burt, CEO of Code Red.


Go here to see the original:
What challenges do we face five years after the launch of the ... - Open Access Government
