Study of AI and its heroes, villains in Silicon Valley – Study International News

The study of AI (artificial intelligence) and its development is as complex as it is promising.

It's precisely why those skilled in this field enjoy what many consider a progressive career: a job where difficulty and responsibility grow over time.

And there are many such roles today.

Computer science and information technology employment was projected to grow 11% from 2019 to 2029, adding about 531,200 new jobs with higher-than-average salaries, according to the US Bureau of Labor Statistics.

The World Economic Forum ranked AI and Machine Learning Specialist #2 on its list of Top 20 job roles in increasing and decreasing demand across industries.

But that did not seem to be the case for Sam Altman, CEO of OpenAI, last weekend.

OpenAI is the company that kicked off an AI arms race when its chatbot, ChatGPT, debuted in November 2022. It was dubbed the best artificial intelligence chatbot ever released to the general public.

Altman quickly became the face of GenAI. A few months after ChatGPT's debut, Microsoft deepened its earlier US$1 billion investment in OpenAI to pursue artificial general intelligence: a machine that could do anything the human brain could do.

Altman was compared to Bill Gates, the co-founder of software giant Microsoft.

Then, last weekend, a stunning fall from grace.

On Nov. 17, 2023, Altman was abruptly dismissed following what OpenAI said was "a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."

At the time, OpenAI's board was composed of six members: three co-founders and three non-staff members.

Other sources, such as AFP, reported that the turmoil escalated the differences between Altman, who has become the face of generative AI's rapid commercialisation since ChatGPT's arrival a year ago, and OpenAI's board members, who expressed deep reservations about the safety risks posed by increasingly advanced AI.

These are signs of cracks within Silicon Valley.

More importantly, it raises the question: why is there such drama surrounding the study of AI and its development?

While generative AI has disrupted many lives and industries across the globe, some world leaders have grown fearful of its potentially limitless power.

Even before ChatGPT, the US government had warned of the danger of AI wiping out jobs.

"The issue is not that automation will render the vast majority of the population unemployable," said Jason Furman, Obama's chief economist and chairman of the US Council of Economic Advisers.

Instead, jobs created by AI could come too slowly, pay too little, and exclude the least skilled who need them most. Workers who lack the skills or opportunity to quickly find new, decent jobs enabled by automation could find themselves effectively excluded from the job market. That leaves us with the worry that the only reason we will still have our jobs is because we are willing to do them for lower wages.

The warnings continued in the years that followed.

In May this year, scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning about the perils that AI poses to humankind.

Worries about AI systems outsmarting humans have intensified with the rise of a new generation of capable AI chatbots such as ChatGPT and Bard, among others.

It spurred countries across the globe to regulate these developing technologies, with the European Union blazing the trail with its proposed AI Act.

Higher education institutions have responded in their own ways, too.

The Institute for Ethics in AI at the University of Oxford brings together world-leading philosophers and other experts in the humanities with technical developers and users of AI in academia, business and government.

Researchers here focus on investigating the ethical impacts from all perspectives, covering six themes: AI and Democracy, AI and Governance, AI and Human Rights, AI and Human Well-Being, AI and the Environment, and AI and Society.

The University of Melbourne offers a micro-certificate, Introduction to the Ethics of Artificial Intelligence.

Informed by leading research from the Centre for Artificial Intelligence and Digital Ethics (CAIDE), this certificate, among many others, explores how to apply ethical frameworks and theories to AI in your workplace.

But where does Silicon Valley stand in the study of AI and its progress?

Known as the face of ChatGPT, Altman is the CEO and a co-founder of OpenAI (one of three co-founders on its board). He was formerly the president of Y Combinator, a startup accelerator. Source: AFP

The five days of chaos surrounding Altman's position at OpenAI exposed the controversies within Silicon Valley over the study of AI and its development.

Here's what went down:

While OpenAI has been tight-lipped about the reason for Altman's departure, one report suggests that Sutskever, who was pivotal in developing OpenAI's ChatGPT and wants highly advanced systems to behave within defined limits, initiated the recent coup.

This raises another question: how much does the education of these influential figures affect their views on the study of AI and its development?

Sam Altman, a tech visionary and entrepreneur, is a name synonymous with innovation.

Altman dropped out of Stanford in 2005 to create Loopt, a location-sharing app, eventually selling it for US$43.4 million to Green Dot in 2012.

In 2011, he joined the influential startup accelerator Y Combinator, later serving as its president, before becoming OpenAI's CEO in 2019.

As the CEO of OpenAI, Altman catapulted ChatGPT to global fame and has become Silicon Valley's sought-after voice on the promise and potential dangers of AI.

"I can't imagine that this would have happened to me," Altman told Intelligencer about his new role as leader of the AI movement.

Altman believes AI technology will reshape society as we know it. While he thinks it comes with real dangers, he also sees it as potentially the greatest technology humanity has yet developed to significantly enhance our lives.

Once the chief technology officer at Stripe, Brockman left to co-found OpenAI with Elon Musk, Sam Altman, and Ilya Sutskever. Source: AFP

Greg Brockman is the President and co-founder of OpenAI.

Brockman attended Harvard University and the Massachusetts Institute of Technology (MIT), dropping out of both.

At Harvard, he collaborated with the Harvard Computer Society to administer and build computer systems. At MIT, he worked on projects like XVM and Linerva.

He later left to contribute to the founding of Stripe, an Irish-American multinational financial services and software-as-a-service (SaaS) company dual-headquartered in South San Francisco, California, and Dublin, Ireland.

In May 2015, Brockman left Stripe to co-found OpenAI with Altman. With a genuine belief in AI's potential for positive impact, Brockman advocates for ethical and responsible development.

"We must ensure AI benefits all of humanity," Brockman asserts, underscoring OpenAI's commitment to advancing the field while prioritising safety and inclusivity.

His loyalty to Altman runs deep: Brockman announced he was departing as president hours after the board pushed Altman out. In a post on the social media site X, he wrote: "Based on today's news, I quit."

Toner joined OpenAI's board in 2021. She is the director of strategy and foundational research grants at Georgetown's Center for Security and Emerging Technology. Source: AFP

Helen Toner, a board member and director of strategy at Georgetown's Center for Security and Emerging Technology (CSET), holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne.

Before joining CSET, Toner lived in Beijing, studying the Chinese AI ecosystem as a research affiliate of Oxford University's Centre for the Governance of AI.

She is clear-eyed about the risks of generative AI.

In a paper she co-authored comparing the safety approaches of OpenAI and Anthropic, Toner cautioned against excessive reliance on AI chatbots and advocated for US government action to balance innovation with protecting citizens from AI risks.

The paper reportedly led her to clash with Altman.

Microsoft invested US$13 billion into OpenAI, yet it was unaware that Sam Altman was being fired. Source: AFP

Satya Nadella, the CEO of Microsoft, has a degree in electrical engineering from the Manipal Institute of Technology, an MS in computer science from the University of Wisconsin-Milwaukee, and an MBA from the University of Chicago.

In an interview, Nadella shared his perspective on AI, saying: "Technology will provide more and more ways to bring people together."

He believes in AI's potential to empower people and transform industries. "I see these technologies acting as a co-pilot, helping people do more with less," he stated passionately.

Microsoft is OpenAI's largest investor, with a stake of over US$10 billion.

The Microsoft CEO reached out to Altman following the firing to offer him support in his next steps.

Sutskever is OpenAI's chief scientist, a co-founder, and a board member who appears to have played an outsized role in Altman's firing. Source: AFP

Ilya Sutskever is OpenAI's chief scientist and co-founder, and one of the board members with whom Altman clashed over issues including the pace of developing generative AI.

He graduated from the University of Toronto with a bachelor's degree in mathematics in 2005, a Master of Science in computer science in 2007, and a Doctor of Philosophy in 2013.

In 2015, after a short stint at Google, Sutskever co-founded OpenAI and eventually became its chief scientist; so critical was he to the company's success that Elon Musk has taken credit for recruiting him.

In an interview with MIT Technology Review, Sutskever expressed his focus on preventing artificial superintelligence from going rogue.

Artificial superintelligence refers to a hypothetical level of AI that surpasses human intelligence in virtually all aspects.

In fact, the OpenAI leadership shakeup centred on AI safety, with Sutskever disagreeing with Altman on the pace of commercialising generative AI and on measures to reduce public harm.

"It's obviously important that any superintelligence anyone builds does not go rogue," Sutskever says.

Despite the fiasco, however, Sutskever has since publicly apologised on X, expressing regret for his decisive vote against Altman and indicating his renewed support for him.
