Category Archives: Artificial Super Intelligence

3 AI-Backed Stocks That Could Return Magnificent Gains in 2024 – The Motley Fool

If you don't already own artificial intelligence stocks, you're likely to be missing out on one of the biggest technology inflections in history. But if you fear you've already missed the boat, keep in mind many key AI industry participants still trade below their 2021 highs.

But if interest rates stabilize and AI tailwinds persist, as many suspect they will, look for these three names to make new all-time highs -- likely in 2024.

The AI world got a shock on Friday, when OpenAI CEO Sam Altman was fired by OpenAI's board of directors. While the situation appears fluid and Altman may be able to return, it is clearly a less-than-ideal situation.

In the cloud industry, OpenAI investor Microsoft (MSFT -0.11%) is thought to have the AI lead because of the OpenAI partnership, but the current chaos may have thrown that "lead" into question. Meanwhile Amazon (AMZN 0.02%), Microsoft's chief rival in the cloud computing space, is making its own AI moves.

September was actually a momentous month for Amazon's AI ambitions. Amazon Web Services made its Amazon Bedrock service generally available to enterprise customers. Bedrock is AWS's generative AI platform, through which companies can access large language models (LLMs) from leading AI start-ups AI21 Labs, Anthropic, Cohere, and Stability AI, as well as Meta Platforms' LLM, Llama. In addition, Amazon has pre-trained models of its own, called Titan, which customers can combine with their own private data to glean insights. Finally, Amazon's AI-powered CodeWhisperer helps developers write and implement software code quickly and efficiently with natural-language prompts.

September also saw Amazon announce a strategic collaboration with AI start-up Anthropic. In exchange for a minority investment of up to $4 billion, Anthropic will commit to using AWS as its primary cloud provider and to using Amazon's in-house-designed Trainium and Inferentia chips. The deal is in many ways Amazon's answer to Microsoft's collaboration with OpenAI, so we will see if the Anthropic deal gives Amazon a leg up in the AI wars.

And of course, Amazon is an innovative company with huge scale across its e-commerce, advertising, and other consumer businesses. That size and data advantage should also allow Amazon's other businesses to benefit from efficiencies gleaned from AI. And that may already be happening; last quarter, Amazon's non-cloud North American business grew 11%, and its International business grew 16%, which are very healthy rates for businesses that large.

Given that Amazon is still 25% below its all-time highs, Amazon is a "Prime" candidate for a strong 2024.

Like the cloud computing business, the memory industry seems to be bottoming out. Micron Technology (MU -0.30%) is one of only three major DRAM manufacturers, and the only one based in the United States.

Fortunately for Micron, artificial intelligence servers require several times more DRAM than traditional enterprise servers, and research firm TrendForce recently projected that AI server unit shipments will grow at a mid-teens rate for the next five years.

That should help underpin the DRAM market, which is due for an upturn even outside of AI servers. The post-pandemic period led to the worst-ever drop in demand for PC and mobile DRAM in mid-2022, but that long down-cycle has also shown recent signs of turning around:

MU EBIT (Quarterly) data by YCharts

Not only that, but Micron has overtaken its rivals on leading technology nodes over the past year. A year ago, Micron was the first company to manufacture DRAM on the 1-beta node. Recently, Micron introduced a new 128 GB RDIMM module built on 32 Gb DDR5 DRAM dies that is highly desirable for AI applications. And next year, Micron will begin shipping its new high-bandwidth memory (HBM3) for AI applications, whose specs exceed those of competitors' offerings on the market today.

With the memory market bottoming out and AI-related demand tailwinds just starting to kick in, Micron should see its current losses turn into profits -- potentially, big profits -- next year.

Unlike Amazon and Micron, server maker Super Micro Computer (SMCI -0.34%) reached an all-time high earlier this year, but it has pulled back about 20% from those early-August highs. Despite its outperformance over the past two years, shares still don't look expensive at 26 times trailing earnings and 16.7 times fiscal 2024 earnings estimates, with Super Micro's fiscal year ending next June.

Super Micro's energy-efficient servers, with unique features such as liquid cooling and building-block architecture, have found favor with artificial intelligence companies. Over the past year, the majority of SMCI's revenue has come from AI-related servers. Given the hypergrowth projected for AI servers going forward, Super Micro should be a strong grower not only this year, but for years to come.

This year, Super Micro announced a new Malaysia manufacturing plant that will come online in 2024, which should double the company's capacity and lower its manufacturing costs significantly. And just two weeks ago, Super Micro announced it can now deliver 5,000 server racks per month as a result of surging demand. Why is this important? Because just two quarters ago, management had hoped to reach 4,000 racks per month by year-end. That means Super Micro is exceeding its own goals in meeting strong demand.

Super Micro also plans to grow well beyond this year. While it has guided for revenue of $10 billion to $11 billion in fiscal 2024, CEO Charles Liang has set a goal of $20 billion, which he sees as "just a couple years away." Super Micro has a profitable history of beating its own guidance and publicly stated goals, so the company could get there even faster.

That makes it a stock that can soar even further in 2024.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Billy Duberstein has positions in Amazon, Meta Platforms, Micron Technology, Microsoft, and Super Micro Computer and has the following options: short January 2025 $110 puts on Super Micro Computer, short January 2025 $125 puts on Super Micro Computer, short January 2025 $130 puts on Super Micro Computer, short January 2025 $280 calls on Super Micro Computer, short January 2025 $380 calls on Super Micro Computer, and short January 2025 $85 puts on Super Micro Computer. His clients may own shares of the companies mentioned. The Motley Fool has positions in and recommends Amazon, Meta Platforms, and Microsoft. The Motley Fool recommends Super Micro Computer. The Motley Fool has a disclosure policy.

AI and the law: Imperative need for regulatory measures – ft.lk

Using AI Technology, without the needed laws and policies to understand and monitor it, can be risky

"The advent of superintelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn't malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."1

Generative AI, the most well-known example of which is ChatGPT, has surprised many around the world because its responses to queries are so human-like. Its impact on industries and professions, including the legal profession, will be unprecedented. However, there are pressing ethical and even legal matters that need to be recognised and addressed, particularly in the areas of intellectual property and data protection.

Firstly, how does one define Artificial Intelligence? AI systems could be considered information-processing technologies that integrate models and algorithms, producing the capacity to learn and to perform cognitive tasks, leading to outcomes such as prediction and decision-making in material and virtual environments. Though in general parlance we have referred to them as robots, AI is developing at such a rapid pace that it is bound to be far more independent than one can ever imagine.

As AI migrated from Machine Learning (ML) to Generative AI, the risks we face have also grown exponentially. The release of generative technologies has not been human-centric. These systems produce results that cannot be exactly proven or replicated; they may even fabricate and hallucinate. Science fiction writer Vernor Vinge speaks of the concept of technological singularity, where one can imagine machines with superhuman intelligence outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and potentially subduing us with weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all2.

The EU AI Act and other judgements

Laws and regulations are in the process of being enacted in some developed jurisdictions, such as the EU and the USA. The EU AI Act (the Act) is one of the main regulatory statutes being scrutinised. The approach that MEPs (Members of the European Parliament) have taken with regard to the Act has been encouraging. On 1 June, a vote was taken in which MEPs endorsed new risk-management and transparency rules for AI systems, primarily to endorse human-centric and ethical development of AI. They are keen to ensure that AI systems are overseen by people, and are safe, transparent, traceable, non-discriminatory and environmentally friendly. The term AI will also have a uniform, technology-neutral definition, so that it applies to AI systems today and tomorrow.

Co-rapporteur Dragoș Tudorache (Renew, Romania) stated, "We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement."3

The Act has also adopted a risk-based approach to categorising AI systems, and has made recommendations accordingly. The four levels of risk are:

Unacceptable risk (e.g., remote biometric identification systems in public),

High risk (e.g., use of AI in the administration of justice and democratic processes),

Limited risk (e.g., using AI systems in chatbots) and

Minimal risk (e.g., spam filters).

Under the Act, AI systems which are categorised as Unacceptable Risk will be banned. For High Risk AI systems, which is the second tier, developers are required to adhere to rigorous testing requirements, maintain proper documentation and implement an adequate accountability framework. For Limited Risk systems, the Act requires certain transparency features which allows a user to make informed choices regarding its usage. Lastly, for Minimal Risk AI systems, a voluntary code of conduct is encouraged.
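The tiered logic described above can be sketched as a simple lookup. This is purely an illustration of the article's summary, not the Act's legal text; the function name and example values are hypothetical:

```python
# Illustrative only: maps the EU AI Act's four risk tiers (as summarized
# in the article) to the broad obligations attached to each tier.
RISK_TIERS = {
    "unacceptable": "banned (e.g., remote biometric identification in public)",
    "high": "rigorous testing, documentation, accountability framework",
    "limited": "transparency features so users can make informed choices",
    "minimal": "voluntary code of conduct encouraged",
}

def obligations_for(tier: str) -> str:
    """Return the article's summary of obligations for a given risk tier."""
    try:
        return RISK_TIERS[tier.lower()]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")

print(obligations_for("high"))
```

A real compliance workflow would, of course, classify a system into a tier first, which is the legally difficult part; the lookup only captures the Act's tier-to-obligation structure.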

Moreover, in May 2023, a judgement4 was issued in the US state of Texas requiring all attorneys to file a certificate containing two statements: that no part of the filing was drafted by generative AI, and that any language drafted by generative AI has been verified for accuracy by a human being. The order followed a New York case in which an attorney had used ChatGPT, which cited non-existent cases. Judge Brantley Starr stated, "[T]hese platforms in their current states are prone to hallucinations and bias... on hallucinations, they make stuff up, even quotes and citations." As ChatGPT and other generative AI technologies are used more and more, including in the legal profession, it is imperative that professional bodies and other regulatory bodies draw up appropriate legislation and policies covering the usage of these technologies.

UNESCO

On 23 November 2021, UNESCO published a document titled "Recommendation on the Ethics of Artificial Intelligence"5. It emphasises the importance of governments adopting a regulatory framework that clearly sets out a procedure, particularly for public authorities, to carry out ethical impact assessments on AI systems in order to predict consequences, address societal challenges and facilitate citizen participation. In explaining the assessment further, UNESCO's recommendations also state that it should have appropriate oversight mechanisms, including auditability, traceability and explainability, enabling the assessment of algorithms, data and design processes, as well as an external review of AI systems. The 10 principles highlighted in the document include:

Proportionality and Do Not Harm

Safety and Security

Fairness and Non-Discrimination

Sustainability

Right to Privacy and Data Protection

Human Oversight and Determination

Transparency and Explainability

Responsibility and Accountability

Awareness and Literacy

Multi Stakeholder and Adaptive Governance and Collaboration.

Conclusion

The level of trust citizens place in AI systems will be a factor in determining how widely those systems are used in the future. As long as there is transparency in the models used in AI systems, one can hope to achieve a degree of respect for, protection of, and promotion of human rights, fundamental freedoms and ethical principles6. UNESCO Director-General Audrey Azoulay stated, "Artificial intelligence can be a great opportunity to accelerate the achievement of sustainable development goals. But any technological revolution leads to new imbalances that we must anticipate."

Multiple stakeholders in every state need to come together in order to advise on and enact the relevant laws. Using AI technology without the needed laws and policies to understand and monitor it can be risky. On the other hand, not using available AI systems for the tasks at hand would be a waste. In conclusion, in the words of Stephen Hawking7: "Our future is a race between the growing power of our technology and the wisdom with which we use it. Let's make sure wisdom wins."

Footnotes:

1 Pp. 11-12; "Will Artificial Intelligence outsmart us?" by Stephen Hawking; essay taken from Brief Answers to the Big Questions, John Murray (2018)

2 Ibid

3https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

4https://www.theregister.com/2023/05/31/texas_ai_law_court/

5 https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence

6 Ibid; Pg 22

7 "Will Artificial Intelligence outsmart us?" by Stephen Hawking; essay taken from Brief Answers to the Big Questions, John Murray (2018)

(The writer is an Attorney-at-Law, LL.B (Hons.) (Warwick), LL.M (Lon.), Barrister (Lincoln's Inn), UK. She obtained a Certificate in AI Policy at the Center for AI and Digital Policy (CAIDP) in Washington, USA in 2022. She was also a speaker at the World Litigation Forum Law Conference in Singapore (May 2023) on the topic of "Lawyers using AI, Legal Technology and Big Data" and was a participant at the IGF Conference 2023 in Kyoto, Japan.)

Are chatbots, super apps and AI the future of European marketplaces? – InternetRetailing

Marketplaces have evolved rapidly. This momentum has been maintained as they now embrace technology and new ways to sell.

The recently published European Marketplaces 2023 report highlights some new technologies, and even new business models, that will be important as the sector evolves even further next year.

AI-powered recommendations

The need to differentiate through better product recommendations in a crowded market, and even to use tech to let shoppers spontaneously discover new things, is about to step up thanks to artificial intelligence (AI).

Recommendation engines on marketplaces and retailer websites are nothing new but their level of sophistication is growing as ever-more powerful AI is brought to bear. This garners much deeper insights into consumer behaviour, allowing for much richer and more nuanced recommendations.

And it's not just for existing customers. The holy grail is to be able to recommend things at least in the right ballpark to anonymous customers. Today's (and even more so, tomorrow's) AI is starting to be able to do this, by better understanding what these new customers look at, where they linger and how they behave on the site. This can then help deliver much more intelligent suggestions and guide shoppers not just to products but also to discounts and offers, and even to sign up.
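As a toy sketch of that idea, behavioral signals from an anonymous session (categories viewed, dwell time) can be turned into weights that rank candidate products. Everything here, the data shapes, the catalog, and the dwell-time weighting, is hypothetical; production recommenders use far richer models:

```python
from collections import defaultdict

def recommend(session_views, catalog, top_n=3):
    """Rank catalog items for an anonymous visitor.

    session_views: list of (category, dwell_seconds) from the current session.
    catalog: dict mapping product name -> category.
    """
    # Build category weights from dwell time: where the shopper lingers matters.
    weights = defaultdict(float)
    for category, dwell in session_views:
        weights[category] += dwell
    # Score each product by how much attention its category received.
    scored = sorted(catalog, key=lambda p: weights[catalog[p]], reverse=True)
    return scored[:top_n]

views = [("headphones", 40.0), ("laptops", 120.0), ("headphones", 15.0)]
catalog = {"UltraBook 14": "laptops", "BassPods": "headphones", "Desk Lamp": "home"}
print(recommend(views, catalog))  # laptops first: the session lingered there longest
```

The point of the sketch is only that behavior observed within a single session, with no account or purchase history, already yields a usable ranking signal.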

This is vital to marketplaces of all types. The sector is becoming increasingly competitive and, whether a flat, generalised marketplace or a vertical specialist, understanding what customers want is vital.

This trend will be seen in 2024 across all ecommerce, not least as retailers attempt to make their non-marketplace sites perform better than their marketplace competitors. Because of this, marketplace offerings that don't look to AI-powered, deeply intelligent recommendations will struggle.

Chatbots, AI and voice commerce

AI will not just be confined to recommendations; it has a role to play across the whole ecommerce process, from marketplaces to traditional D2C sites. One of the key areas where it will come to the fore is in chatbots and other tools that allow communication between a marketplace, the retailer and customers.

Of course, chatbots are already used across ecommerce, handling FAQs via instant messaging and routing queries to customer service agents based on intelligently assessing what the consumer is asking. This is very much chatbot 1.0, though, and as AI has increased exponentially in power and ubiquity over the past 24 months, so too has the power of AI-run chatbots.

Moving on from simple rules-based responses to keywords and phrases that find answers or redirect a query, today's AI allows for self-learning that leads to a degree of understanding. The use of generative AI technologies such as GPT-4 can also help create bespoke and highly conversational answers. Combining this self-learning, generative approach with rules-based systems can, in theory, create a much more realistic, although still totally artificial, interaction between brand and customer.
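A minimal sketch of that hybrid pattern: try rule-based matching first, and fall back to a generative model only when no rule fires. The `generate` stub stands in for whatever LLM call an implementation would use; all names and canned replies here are hypothetical:

```python
# Rule-based layer: predictable answers for known FAQ keywords.
RULES = {
    "refund": "To request a refund, visit Orders > Returns within 30 days.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def generate(query: str) -> str:
    # Stub standing in for a generative-model call (e.g., an LLM API).
    return f"[generated reply to: {query}]"

def answer(query: str) -> str:
    """Rules first for predictable FAQs; generative fallback for everything else."""
    q = query.lower()
    for keyword, reply in RULES.items():
        if keyword in q:
            return reply
    return generate(query)

print(answer("How long does shipping take?"))        # hits the rule-based path
print(answer("Can this jacket handle a winter hike?"))  # falls through to generation
```

The design choice the article describes is exactly this ordering: rules keep compliance-sensitive answers deterministic, while generation handles the long tail of open-ended questions.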

While this can help handle far more consumer interactions than a team of humans ever could, its true potential lies in elevating these interactions beyond merely reacting to customer comments, towards creating genuine two-way conversations, akin to how an old-style shop assistant might have guided customers to buy. This has the potential, along with AI-powered recommendations, to create a new paradigm in online selling, cross-selling and up-selling.

For today's marketplaces, this combination is likely to be a much-needed differentiator in the competitive years ahead. It also once again shifts the nature of ecommerce from something that a customer does towards something that happens to, or, even better, with the consumer.

News in late 2023 that Facebook owner Meta is poised to release chatbots with a personality on Facebook Messenger will only fuel this growth in conversational interaction.

The application of AI to voice also lends itself to voice commerce. Already well documented when smart speakers such as Amazon's Echo and Apple's HomePod first hit the headlines, the use of voice to interact with websites and sellers has, to some degree, failed to ignite widespread interest. Yet with natural language processing (NLP) and generative AI surging ahead, the ability to interact verbally with marketplaces is likely to return to the agenda; only this time, it won't just be through smart speakers. To gain mass appeal, it will be through the websites and apps of the marketplaces themselves.

It seems inevitable that the internet will slowly edge towards being some sort of metaverse: a more immersive, semi-naturalistic platform where interaction is less about typing and clicking and more about pointing and speaking. When it does, natural voice interaction and chatbots are likely to become one of the main ways we all use the web, including how we shop on marketplaces.

Messaging, payments and super apps

This shift towards talking to marketplace apps is potentially part of a far broader shift in how everyone will interact with the internet. In the case of marketplaces, it will allow consumers to talk to these vendors (actually, their AI-powered apps) to search, get recommendations, discuss products and then buy them in a much more naturalistic way.

Already many younger people are communicating by sending each other voice messages. Apple has added the ability to send video messages via its FaceTime messaging app. Facebook is, as said, creating chatbots with personality. The way we access the web is already changing.

Even for those who aren't shifting to this new way of communicating with the digital world, messaging services such as SMS, iMessage, WhatsApp and social media messaging all increasingly play a role in how consumers interact with the companies they do business with. The era of conversational commerce, be that through text or voice, is upon us, and it's set to create some radical new ways in which we shop. Social media sites, for instance, are shifting from carrying promotional posts about retailers to allowing consumers to buy from them directly, adding to this conversational commerce model.

Combining messaging and social engagement with shopping and indeed payments can create a powerful new marketplace model. Bringing them all together in one place to create a super app has the potential to build a new, rich way to interact with retailers which, in turn, can lead to greater sales.

Such super apps already exist in China. WeChat, for example, combines social media, messaging, payments and ecommerce in one app. Elon Musk's rebrand of Twitter to X, and the changes he has instigated at the platform, are rumoured to be laying the groundwork to turn X into such a super app. For marketplaces, this presents an opportunity.

The platforms already have the customer base, the products and the payments tools. Add in messaging and engagement and they could relatively easily shift to being super apps. Conversely, social media platforms have the customer base, the messaging and the retailers on board. As they add ecommerce, they, too, are poised to do the same.

As the internet slowly edges towards being an immersive metaverse, these super apps would be perfectly positioned to usher in a whole new modus operandi for online sellers and customers, radically altering not only what constitutes a marketplace, but also what the internet actually looks like to its users.

This feature was authored by Paul Skeldon, and appears in the ChannelX European Marketplaces 2023 report.

Download it in full to discover what the marketplace landscape looks like today, what factors are key to a successful marketplace, and how marketplaces are working to protect both brands and customers from fraud, counterfeits and piracy.

1 Under-the-Radar Artificial Intelligence (AI) Stock to Buy Hand Over … – The Motley Fool

One of the lesser-known enterprise technology companies benefiting from artificial intelligence (AI) is ServiceNow (NOW 0.77%). The company specializes in business process automation across a variety of IT services.

Bill McDermott became CEO of ServiceNow back in October 2019 after a long, successful run at SAP. Since then, the stock is up over 160%. A good reason investors have enjoyed such robust returns is that ServiceNow has become one of the most prominent players in digital transformation. Utilizing data to make more informed, impactful decisions is becoming increasingly important for businesses of all sizes. While there are a number of dashboarding tools and data analytics providers, ServiceNow has emerged as one of the leading platforms thanks to its ever-evolving library of product offerings.

While the stock has been generous to investors for several years now, I think it could just be getting started. In fact, ServiceNow made The Fool's list of most undervalued growth stocks for 2023.

AI is a massive catalyst for the company, and its current financial and operating performance demonstrates that. While it may not be as well known as Microsoft, Alphabet, or Amazon, there is plenty of reason to believe that AI is helping ServiceNow evolve into an even more integral platform for businesses of all sizes. Let's assess if the stock deserves a spot in your portfolio.

As with its big tech counterparts, ServiceNow's management has been touting the prospects of AI for the last several months. The company derives revenue from two primary sources: subscriptions and professional services. Subscriptions represent high-margin recurring revenue streams, so investors tend to scrutinize trends in this metric.

For the quarter that ended Sept. 30, ServiceNow reported $2.2 billion in subscription revenue and 27% growth year over year. Even better is that the gross margin for subscription services clocked in at 81%. This high level of profitability has helped ServiceNow generate consistent positive free cash flow, which the company can use for share buybacks or to reinvest into new products and services.

During ServiceNow's Q3 earnings call, McDermott discussed why he thinks AI will help fuel continuous growth. Specifically, he referenced a study by IT research firm Gartner that estimates $3 trillion will be spent on AI-powered solutions between 2023 and 2027. Furthermore, McDermott proclaimed that AI "isn't a hype cycle; it is a generational movement."

Image source: Getty Images.

Software companies often spend a long time testing and demoing their products. Although this often makes the sales cycle and vendor procurement process long and arduous, it is paramount that devices and systems work together seamlessly. The set of software platforms that a company relies on is called its tech stack. In a way, the tech stack represents the nuts and bolts that hold everything together. If important data is stored across multiple systems but cannot easily be stitched together, the tech stack probably isn't as well managed as it could be. Despite this challenging process, ServiceNow is answering the call.

According to its Q3 earnings report, the company has released over 5,000 add-ons and new capabilities for its various modules in 2023 alone, many of which are rooted in generative AI. Moreover, investors learned that 18 of the company's top 20 net new annual contract value deals during Q3 involved eight products or more. This level of cross-selling is precisely why ServiceNow is generating high-double-digit top-line growth at a high margin.

This is an important dynamic because it shows how end-users of ServiceNow are outlaying a lot of capital up front. It's not uncommon for a software company to make a sale, and then try to cross-sell additional services after the initial deal (perhaps upon renewal of the contract). However, in ServiceNow's case, the company is doing a great job of penetrating customers more deeply during the early stages of customer acquisition. By capturing this level of customer value so early, ServiceNow is active beyond just one layer of the tech stack and evolving into what McDermott describes as a full-spectrum "intelligent super platform."

The chart below illustrates the price-to-free-cash-flow multiple for ServiceNow versus that of peers Salesforce.com, Workday, Atlassian, and Snowflake.

NOW Price to Free Cash Flow data by YCharts

Interestingly, ServiceNow's price-to-free-cash-flow multiple of 52 puts it in the middle of its peer set. Salesforce.com trades at a meaningful discount by this measure, but I'd argue that is because the company is more mature and less of a growth stock.
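The multiple being compared here is simple arithmetic: market capitalization divided by trailing free cash flow. A quick sketch, using made-up figures for illustration rather than ServiceNow's actual numbers:

```python
def price_to_fcf(market_cap: float, free_cash_flow: float) -> float:
    """Price-to-free-cash-flow multiple = market cap / trailing free cash flow."""
    if free_cash_flow <= 0:
        # The multiple is not meaningful for companies burning cash.
        raise ValueError("multiple is not meaningful for non-positive FCF")
    return market_cap / free_cash_flow

# Hypothetical figures, in billions of dollars.
print(round(price_to_fcf(130.0, 2.5), 1))  # -> 52.0
```

Because the denominator is cash actually generated rather than accounting earnings, investors often prefer this multiple to P/E when comparing subscription-software peers.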

Given the overall haziness of the macroeconomy, I'd say ServiceNow is performing extremely well. I believe the stock has much more room to run due to the heightened interest in AI and its various use cases. As of now, companies are still spending quite a bit of time figuring out exactly how new breakthroughs in generative AI can best serve the business. For this reason, it's appropriate to think that ServiceNow's place in IT budgets and its role in the AI journey is just beginning.

Long-term investors should be excited about the company's ability to thrive in a market primarily dominated by big tech. I think that now is a terrific opportunity to initiate a position in ServiceNow, given its inroads into AI, strong financial position, and attractive valuation relative to its peers.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Adam Spatacco has positions in Alphabet, Amazon, and Microsoft. The Motley Fool has positions in and recommends Alphabet, Amazon, Atlassian, Microsoft, Salesforce, ServiceNow, Snowflake, and Workday. The Motley Fool recommends Gartner. The Motley Fool has a disclosure policy.

What Is Consciousness, Really? Either AI Rules or Human Spirit … – CEOWORLD magazine

Consciousness is one of the most pressing issues of our time. Right up there with climate.

With AI taking off, tech experts are worried that intelligent machines will gain consciousness within the next few years! Meanwhile, we experience life basically in our conscious minds, yet we have only the faintest inkling of how this mysterious phenomenon works, or even what it is. For these reasons, and more, it's time to address the question: What is consciousness, really?

For example:

Is human consciousness simply the physical bits of information processed within our brain's circuitry? An electrochemical, living intelligence? Like AI, only different?

If AI surpasses human intelligence, what happens to us mere mortals? To us, the people?

How does machine consciousness square with religious beliefs? Is spirituality simply an illusion? Is God a fantasy?

To examine these questions, a TechCast study used collective intelligence to combine background data and the best judgment of 30 experts. We've used this method for 20 years to forecast emerging technologies and social trends with good accuracy. For instance, TechCast published forecasts more than a decade ago that AI would take off in 2023.

The best way to make sense of these results is to sketch two different propositions describing implications of opposing views.

Proposition 1: AI dominates human consciousness

Yes, this is a stark statement, but it's also the logical outcome of the view that sees no significant difference between human consciousness and intelligent forms of AI.

The belief underlying this view is that consciousness is the intelligence shown by any sufficiently complex system. The corollary belief is that human consciousness is simply an outcome of information in the brain. And since AI will soon have near infinite power to process information, some form of Artificial Super Intelligence (ASI) is likely to eclipse human intelligence. From this view, it follows that humans can expect to become inferior to AI, most jobs will be automated, and the vast powers of ASI could threaten humanity.

While this is an elegant theory, its implications are so implausible that they seem to refute the theory itself. For instance, sound solutions to the climate crisis are well known, but the obstacles are due to procrastination, self-interest, and a lack of political will. How could even the most powerful ASI overcome these utterly human foibles? Or could sheer machine intelligence reduce mass shootings? Reconcile the opposing sides of the interminable conflict over abortion? In short, even the most intelligent AI is not likely to rule the world.

I hope the point is clear. Intelligence, the ability to manage objective knowledge, is fundamentally different from the subjective ability to resolve the messy, intractable dilemmas that confound humans. Julian Taylor, a software engineer at Sun Microsystems, highlighted the limits of AI: "No algorithm that I have devised has ever developed an unpredictable goal. This is simply not what these algorithmic systems do." And Kurt Gödel, the famous logician who proved the incompleteness theorems, concurred: "No conceivable collection of algorithms can possibly manifest human self-aware consciousness."

The meaning of this limitation is profound: the most intelligent machine can't be endowed with agency, the ability to exercise free will, to act independently. Yes, it's almost a given that AI will soon be able to model emotions, values, beliefs, and other human qualities. But they'll be just that: simple simulations of human consciousness. Not the real thing. The most brilliant machine intelligence seems doomed to lack agency. For an everyday example, your GPS car navigation system may be brilliant at leading you somewhere, but you must tell it your destination. Only you have agency!

Of the endless brilliant robots and semi-conscious AI systems out there, none are capable of truly independent behavior. For a simple example, your pet Roomba may sweep your floors with great abandon, but only within its programmed limits. Cute, but it doesn't have a conscious mind, really. No agency. ChatGPT? Nope. You have to tell it what to do. Even IBM's powerhouse Deep Blue, able to beat the top chess masters, had no agency. Without some dramatic, as-yet-unknown breakthrough in AI, the concept of machine intelligence with human consciousness remains only a theory lacking support.

Yet almost all AI experts are convinced that AI superpowers will eclipse humans. Yuval Harari leads this wave of fear with his belief that AI is an alien species that could trigger humanity's extinction. This blind faith fuels techno hype reminiscent of the mass hysteria we saw when Y2K threatened to destroy civilization in the year 2000. When the critical turn of the century passed, nothing happened! This study offers a more sober, realistic vision. It's time to think of AI as simply a powerful tool to be managed carefully.

This impasse in the logic of AI superiority leads to a second proposition that resolves this contradiction.

Proposition 2: Human spirit transcends AI

The lack of AI agency stands in sharp contrast to what Webster's dictionary calls the human spirit. We could call it the self, or we could think of it as the soul. Whatever it is, knowing that something in human consciousness is more powerful than information helps make sense of our world today.

A solid majority of our experts think that mood shifts, altered awareness, free will, and other states of mind transcend the physical body. Nobel laureate Roger Sperry summed it up: "The mind acts as an independent force."

This view also affirms the belief of all religions that humans are spiritual beings. Albert Einstein himself said, "The most profound emotion we can experience is the mystical ... some spirit is manifest in the laws of the universe." And cognitive scientist David Chalmers thinks, "We are likely to discover that consciousness is a fundamental property of the universe, like space, time and gravity."

Once we accept this special role of the human spirit, the dilemmas noted above fade into a coherent story of the future. Sure, ASI is almost certain to vastly exceed our feeble ability to manage the overwhelming complexity of modern life. But that's okay, because we'll be there to guide it. To design the systems using principles that ensure their safe behavior. To monitor them carefully and take action to avert problems.

Not only can we manage this AI-human symbiosis, but the resulting freedom from today's mind-numbing knowledge work will unleash even more human freedom. More creativity. More awareness. If we can summon the courage and global consciousness to surmount the enormous challenges ahead, we might even see the flowering of the human spirit.

Written by William E. Halal.



Read the rest here:

What Is Consciousness, Really? Either AI Rules or Human Spirit ... - CEOWORLD magazine

An AI just negotiated a contract for the first time ever and no human was involved – CNBC


In a world first, artificial intelligence demonstrated the ability to negotiate a contract autonomously with another artificial intelligence without any human involvement.

British AI firm Luminance developed an AI system based on its own proprietary large language model (LLM) to automatically analyze and make changes to contracts. LLMs are a type of AI algorithm that can achieve general-purpose language processing and generation.

Jaeger Glucina, chief of staff and managing director of Luminance, said the company's new AI aimed to eliminate much of the paperwork that lawyers typically need to complete on a day-to-day basis.

In Glucina's own words, Autopilot "handles the day-to-day negotiations, freeing up lawyers to use their creativity where it counts, and not be bogged down in this type of work."

"This is just AI negotiating with AI, right from opening a contract in Word all the way through to negotiating terms and then sending it to DocuSign," she told CNBC in an interview.

"This is all now handled by the AI, that's not only legally trained, which we've talked about being very important, but also understands your business."

Luminance's Autopilot feature is much more advanced than Lumi, Luminance's ChatGPT-like chatbot.

That tool, which Luminance says is designed to act more like a legal "co-pilot," lets lawyers query and review parts of a contract to identify any red flags and clauses that may be problematic.

With Autopilot, the software can operate independently of a human being, though humans are still able to review every step of the process, and the software keeps a log of all the changes made by the AI.

CNBC took a look at the tech in action in a demonstration at Luminance's London offices. It's super quick. Clauses were analyzed, changes were made, and the contract was finalized in a matter of minutes.

There are two lawyers on either side of the agreement: Luminance's general counsel and the general counsel for one of Luminance's clients, the research firm ProSapient.

Two monitors on either side of the room show photos of the lawyers involved, but the forces driving the contract analysis, scrutinizing its contents and making recommendations, are entirely AI.

In the demonstration, the AI negotiators go back and forth on a non-disclosure agreement, or NDA, that one party wants the other to sign. NDAs are a bugbear in the legal profession, not least because they impose strict confidentiality limits and require lengthy scrutiny, Glucina said.

"Commercial teams are often waiting on legal teams to get their NDAs done in order to move things to the next stage," Glucina told CNBC."So it can hold up revenue, it can hold up new business partnerships, and just general business dealings. So, by getting rid of that, it's going to have a huge effect on all parts of the business."

Legal teams are spending around 80% of their time reviewing and negotiating routine documents, according to Glucina.

Luminance's software starts by highlighting contentious clauses in red. Those clauses are then redrafted into something more suitable, and the AI keeps a running log of the changes it makes as it works. The AI takes into account companies' preferences for how they normally negotiate contracts.

For example, the NDA suggests a six-year term for the contract. But that's against Luminance's policy. The AI acknowledges this, then automatically redrafts it to insert a three-year term for the agreement instead.
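Luminance hasn't disclosed how Autopilot works internally, but the workflow the demonstration describes (flag a clause, redraft it to match policy, keep a log of the change) can be illustrated with a deliberately simple rule-based sketch. The policy value, function name and clause text below are hypothetical stand-ins, not Luminance's actual system:

```python
import re

# Hypothetical negotiation playbook value, for illustration only.
POLICY_MAX_TERM_YEARS = 3

def review_term_clause(clause):
    """Flag a contract term that exceeds policy, redraft it, and log the change."""
    match = re.search(r"(\d+)\s*years?", clause)
    if not match:
        return clause, []          # no term found: nothing to do
    years = int(match.group(1))
    if years <= POLICY_MAX_TERM_YEARS:
        return clause, []          # within policy: leave untouched
    # Redraft the offending number and record what was changed.
    redrafted = clause.replace(match.group(1), str(POLICY_MAX_TERM_YEARS), 1)
    log = [f"term {years} years exceeds policy; redrafted to {POLICY_MAX_TERM_YEARS} years"]
    return redrafted, log

clause = "This Agreement shall remain in force for a term of 6 years."
new_clause, changes = review_term_clause(clause)
print(new_clause)   # term redrafted from 6 to 3 years
print(changes)
```

A real system would replace the regular expression with a legally trained model that understands clause semantics, but the shape of the loop is the same: review, redraft, log, and hand the document back to the counterparty's AI.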

Glucina said that it makes more sense to use a tool like Luminance Autopilot rather than something like OpenAI's software, as it is tailored specifically to the legal industry, whereas tools like ChatGPT and Dall-E and Anthropic's Claude are more general-purpose platforms.

That was echoed by Peel Hunt, the U.K. investment bank, in a note to clients last week.

"We believe companies will leverage domain-specific and/or private datasets (eg data curated during the course of business) to turn general-purpose large language models (LLMs) into domain-specific ones," a team of analysts at the firm said in the note.

"These should deliver superior performance to the more general-purpose LLMs like OpenAI, Anthropic, Cohere, etc."

Luminance didn't disclose how much it costs to buy its software. The company sells annual subscription plans allowing unlimited users to access its products, and its clients include the likes of Koch Industries and Hitachi Vantara, as well as consultancies and law firms.

Founded in 2016 by mathematicians from the University of Cambridge, Luminance provides legal document analysis software intended to help lawyers become more efficient.

The company uses an AI and machine-learning-based platform to process large, complex and fragmented data sets of legal documentation, enabling managers to easily assign tasks and track the progress of an entire legal team.

It is backed by Invoke Capital, a venture capital fund set up by U.K. tech entrepreneur Mike Lynch, along with Talis Capital and Future Fifty.

Lynch, a controversial figure who co-founded enterprise software firm Autonomy, faces extradition from the U.K. to the U.S. over charges of fraud.

He stepped down from the board of Luminance in 2022, though he remains a prominent backer.

Read the original here:

An AI just negotiated a contract for the first time ever and no human was involved - CNBC

‘What do you think of AI?’ People keep asking this question. Here’s five things the experts told me – ABC News

For the last few months, there's one question that I've been asked countless times.

It comes up without fail during idle moments: coffee breaks at work or standing around out at the dog park.

What do you think about AI?

Usually, the tone is quietly sceptical.

For me, the way it's asked conveys a weary distrust of tech hype, but also a hint of concern. People are asking: Should I be paying attention to this?

Sure, at the start of 2023, many of us were amazed by new generative artificial intelligence (AI) tools like ChatGPT.

But, as the months have passed, these tools have lost their novelty.

The tech industry makes big claims about how AI is going to change everything.

But this is an industry that has made big claims before and been proved wrong. It's happened with virtual reality, cryptocurrency, NFTs and the metaverse. And that's just in the past three years.

So, what do I think of AI?

For the past few months I've been working on a podcast series about AI for the ABC, looking broadly at this topic.

It's been a bit like trekking through a blizzard of press releases and product announcements.

Everything solid dissolves into a white-out of jargon and dollar signs.

There's so much excitement, and so much money invested, that it can be hard to get answers to the big underlying questions.

And, of course, we're talking about the future! That's one topic on which no-one ever agrees, anyway.

But here's what I've learned from speaking to some of the top AI experts.

Forget Terminator. Forget 2001: A Space Odyssey.

Hollywood's long-ago visions of the futureare getting in the way of understanding the AI we have today.

If you picture a skeletal robot with red eyes every time someone says "AI", you'll have totally the wrong idea about what AI can do, what it can't, and what risks we should reasonably worry about.

Most of the AI tools we use, from ChatGPT to Google Translate, are machine learning (ML).

If AI is the broad concept of machines being able to carry out tasks in ways that we would consider "smart", ML is one way of achieving this.

The general idea is that, instead of telling a machine how to do a task, you give it lots of examples of wrong and right ways of doing the task, and let it learn for itself.

So for driverless cars, you give an ML system lots of video and other data of cars being driven correctly, and it learns to do the same.

For translation, you give an ML tool the same sentences in different languages, and it figures out its own method of translating between the two.
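That telling-versus-learning distinction fits in a few lines of code. In this toy sketch (the sentence pairs are made-up examples, nothing like a real translation model), no translation rule is ever written down; the mapping is inferred purely by counting which words co-occur across the example pairs:

```python
from collections import Counter, defaultdict

# Tiny made-up "parallel corpus": the same sentences in two languages.
pairs = [
    ("the cat", "le chat"),
    ("the dog", "le chien"),
    ("a cat", "un chat"),
    ("a dog", "un chien"),
]

# Learn from examples: count how often each English word
# appears alongside each French word.
cooc = defaultdict(Counter)
for en, fr in pairs:
    for e in en.split():
        for f in fr.split():
            cooc[e][f] += 1

def translate_word(word):
    """Guess the French word seen most often with this English word."""
    return cooc[word].most_common(1)[0][0]

print(translate_word("cat"))  # "chat": it co-occurs with "cat" in both examples
print(translate_word("dog"))  # "chien"
```

Real ML systems learn statistics over billions of examples rather than four, but the principle is the same: the behaviour comes from the data, not from a programmer's hand-written rules.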

Why does this distinction between telling and learning matter?

Because an ML tool that can navigate a roundabout or help you order coffee in French isn't plotting to take over the world.

The fact it can do these narrow tasks is very impressive, but that's all it's doing.

It doesn't even "know" the world exists, says Rodney Brooks, a world-leading Australian roboticist.

"We confuse what it does with real knowledge," he says.

Rodney Brooks has one of the most impressive resumes in AI. Born, raised and educated in Adelaide, during the 1990s he ran the largest computer science department in the world, at MIT. He's even credited with inventing the robotic vacuum cleaner.

"Because I've built more robots than any other human in the world,I can't quite be ignored,"he told me when I called him at his home in San Francisco, one evening.

Professor Brooks, who's a professor emeritus at MIT, says the abilities of today's AI, though amazing, are wildly over-estimated.

He makes a distinction between "performance" and "competence".

Performance is what the AI actually does: translate a sentence, for example. Competence is its underlying knowledge of the world.

With humans, someone who performs well is also generally competent.

Say you walk up to a stranger and ask them for directions. If they answer with confidence, we figure we can also ask them other things about the city: where's the train station? How do you pay for a ticket?

But that doesn't apply to AI. An AI that can give directions doesn't necessarily know anything else.

"We see ChatGPT do things ... and people say 'It's really amazing'. And then they generalise and imagine it can do all kinds of things there's no evidence it can do," Professor Brooks says.

"And then we see the hype cycle we've been in over the last year."

Another way of putting this is that we have a tendency to anthropomorphise AI, to see ourselves in the tools we've trained to mimic us.

As a result, we make the wrong assumptions about the scale and type of intelligence beneath the performance.

"I think it's difficult for people, even within AI, to figure out what is deep and what is a technique," Professor Brooks says.

Now, many people in AI say it's not so clear cut.

Rodney Brooks and others may be completely wrong.

Maybe future, more advanced versions of ChatGPT will have an underlying model of the world. Performance will equate to competence. AI will develop a general intelligence, similar to humans.

Maybe. But that's a big unknown.

For the moment, AI systems are generally very narrow in what they can do.

From the buzz out of Silicon Valley, you could be forgiven for thinking the course of the future is pretty much decided.

Sam Altman, the boss of OpenAI, the company that built ChatGPT, has been telling everyone that AI smarter than any human is right around the corner. He calls this dream Artificial General Intelligence, or AGI.

Perhaps as a result of this, minor advances are often communicated to the public as though they're proof that AI is becoming super-intelligent. The future is coming, get out of the way.

ChatGPT can pass a law exam? This changes everything.

Google has a new chatbot? This changes everything.

Beyond this hype, there are lots of varying, equally valid, expert perspectives on what today's AI is on track to achieve.

The machine learning optimists, people like Sam Altman, are just one particularly vocal group.

They say that not only will we achieve AGI, but it will be used for good, ushering in a new age of plenty.

"We are working to build tools that one day can help us make new discoveries and address some of humanity's biggest challenges, like climate change and curing cancer," Mr Altman told US lawmakers in May.

Then, there's the doomers. They broadly say that, yes, AI will be really smart, but it won't be addressing climate change and curing cancer.

Some believe that AI will become sentient and aggressively pursue its own goals.

Other doomers fear powerful AI tools will fall into the wrong hands and be misused to generate misinformation, hack elections, and generally spread murder and mayhem.

Then there's the AI sceptics. People like Rodney Brooks.

The real danger, they say, isn't that AI will be too smart, but that it will be too dumb, and we won't recognise its limits.

They point to examples of this happening already.

Driverless cars are crashing into pedestrians in San Francisco. Journalists are being replaced by faulty bots. Facial recognition is leading to innocent people being locked up.

"Today's AI is a very powerful trick," Professor Brooks says.

"It's not approaching, or it's not necessarily even on the way, to a human-level intelligence."

And there's a fourth group (these groups overlap in complicated ways), who say that all of the above misses the point.

We should worry less about what AI will become, and talk more about what we want it to be.

Rumman Chowdhury, an expert in the field of responsible AI, says talking about the future as something that will happen to us, rather than something we shape, is a cop out by tech companies.

AI isn't a sentient being, but just another tech product.

"In anthropomorphising and acting like artificial intelligence is an actor that makes independent decisions, people in tech absolve themselves of the sins of the technology they built," she says.

"In their story, they're a good guytrying to make this thing to help people.

"They've made us believe this AI is alive and making independent decisions and therefore they're not at fault."

Most of the popular discussion about AI and the future focuses on what happens when AI gets too powerful.

This is sometimes called the "alignment problem". It's the idea that, in the end, sentient AI will not do what we want.

Within the AI community, the term "p(doom)" is used to describe the probability of this happening. It's the percentage chance that AI is going to wipe out humanity. "My p(doom) is 20 per cent," etc.

But the most chilling vision of the future I heard wasn't one where robots stage an uprising.

Instead, it was much more mundane and plausible. A boring dystopia.

It's a future where AI pervades every aspect of our lives, from driving a car to writing an email, and a handful of companies that control this technology get very rich and powerful.

Maybe in this future AI is super-intelligent, or maybe not. But it's at least good enough to displace workers in many industries.

New jobs are created, but they're not as good, because most people aren't as economically useful as they were. The skills these jobs require, skills that were once exclusively human, can be done by AI.

High-paying, creative jobs become low-paying ones, usually interacting with AI.

This is the fear that partly motivated US actors and screenwriters to go on strike this year. It's why some authors are suing AI companies.

It's a vision of the future where big tech's disruptions of certain industries over the past 20 years, Google and Facebook sucking advertising revenue out of media and publishing, for instance, are just the preamble to a much larger, global transfer of wealth.

"The thing I worry about is there are fewer and fewer people holding more and more wealth and power and control," Dr Chowdhury says.

"As these models becomemore expensive to build and make, fewer and fewer people actually hold the keys to what's going to be driving essentially the economy of the entire world."

Michael Wooldridge, a computer scientist at Oxford University and one of the world's leading AI researchers, is also worried about this kind of future.

The future he envisions is less like The Terminator, and more like The Office.

Not only are most people paid less for the same work, but they're micromanaged by AI productivity software.

In this"deeply depressing" scenario,humans are the automata.

"A nagging concern I have is that we end up with AI as our boss," Professor Wooldridge says.

"Imagine in a very near future we've got AI monitoring every single keystroke that you type. It's looking at every email that you send. It's monitoring you continually throughout your working day.

"I think that future, unless something happens, feels like it's almost inevitable."

Sixty years ago, in the glory days of early AI research, some leading experts were convinced that truly intelligent, thinking machines were a decade or two away.

About 10 years later, in the early 1980s, the same thing happened: A few breakthroughs led to a flurry of excitement. This changes everything.

But as we know now, it didn't change everything. The future that was imagined never happened.

The third AI boom started in the 2010s and has accelerated through to 2023.

It's either still going, or tapering off slightly. In recent months, generative AI stocks have fallen in the US.

ChatGPT set the record for the fastest-growing user base ever in early 2023. But it hasn't maintained this momentum. Visits to the site fell from June through to August this year.

To explain what's going on, some analysts have referenced Amara's Law, which states that we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

They've also pointed to something called the Gartner Hype Cycle, which is a graphical representation of the excitement and disappointment often associated with new technologies.

Continued here:

'What do you think of AI?' People keep asking this question. Here's five things the experts told me - ABC News

Hacking the future – Harvard School of Engineering and Applied Sciences

Emmanuel Rassou wants to make it easier to learn foreign languages. Joshua Zhang finds it easier to get up in the morning with real-time encouragement from peers. Matt Tengtakrool believes satellites can play a crucial role in disaster response. Sahar Maisha doesn't like to see restaurant food go to waste.

All four are students at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) who recently spent a weekend designing products at HackHarvard 2023 at the Science and Engineering Complex (SEC). They were among the more than 600 students from around the world who participated in the eighth edition of the hackathon and the first at the SEC.

Teams had 36 hours to design, test and pitch finished products inspired by the overall theme, "Hack to the Future." Teams submitted products in one of four tracks: Health and Fitness, Earth and Space, So You Think You Can Do It Better, and Efficiency Boosters. The event also featured career and technology panels and social events such as yoga and tote bag painting.

"Overall, I'm super happy with the event," said director Yuen Ler Chow, a third-year computer science concentrator at SEAS. "Our talks went well and were very well-attended; many were at near- to full-capacity. We sent feedback forms to our hackers, and the overwhelming majority of them were super satisfied with the event."

HackHarvard is a SEAS-affiliated student organization managed by the SEAS Office of Student Experiences and the Office of Academic Programs. Read on to learn more about HackHarvard's projects and prize winners.

Rassou, a freshman studying computer science at SEAS, was part of the team that built Substitutor, a Chrome extension that encourages passive vocabulary learning. The extension converts specific chunks of text on a web page into a foreign language, and the user can read the original text by hovering the cursor over it.

Rassous team also included Beverly Wan from the National University of Singapore, Aayush Gautam from University of Southern Mississippi and John Tawfik from Rutgers University. The team submitted in the Efficiency Boosters Track.

"Initially we didn't have a project idea, so the concept creation phase and brainstorming was stressful," Rassou said. "We thought we were very behind compared to other teams who thought of an idea beforehand. During the last hours, we were coding on pure adrenaline, so we were too tired to even notice any stress."

Sustainabite, another Earth and Space submission, connects restaurants with non-profit organizations such as food banks and homeless shelters. Participating restaurants post specific food left over when their kitchens close each day, and organizations within a 10-mile radius get notified of this update and can request the food. Once a request is accepted and confirmed, a volunteer from the organization picks it up within the hour. Maishas team included New York University student Catherine Huang, Rocio Cotta Antunez from the University of Minnesota-Minneapolis, and Sarah Asad from the University of Washington.

"This was my first overnight, in-person hackathon and it was unlike any hackathon I've ever participated in before," said Maisha, a sophomore studying computer science at SEAS. "Working with new people was a lot of fun, and I left the hackathon with two things: new friendships, and a project that brought our team together."

Tengtakrool, a third-year computer science and statistics student at SEAS, built DRIFTS.space with University of Texas-Austin student Ankit Devalla and Texas A&M student Montgomery Bohde. DRIFTS stands for Disaster Relief Infrastructure For Tracking and Safety. The Earth and Space submission analyzes the positions and capabilities of more than 10,000 satellites currently in orbit to determine the ideal positioning to provide real-time data and support for first responders in a region struck by a natural disaster.

"Hackathons are truly special compared to typical school projects because they force you to ship actionable products really fast," Tengtakrool said. "I think that they force you to be very resourceful, coordinated, and think on your feet."

Wakey, winner of the Funniest Hack Prize, was submitted in the So You Think You Can Do It Better track. The app links users who have all set an alarm for the same time, and as one user gets up, they're able to send additional sound effects or prompts to help everyone else get up. Zhang, a second-year student studying applied math and computer science, built the app with fellow SEAS students Eric Wang and Pedro Garcia, and Princeton student Emily Luo.

"I wanted to gain some computer science background and get more immersed in the culture here," Zhang said. "We already had a concept in mind, so we basically just got right into the implementation."

Colorado College students Ronan Takizawa, Primera Hour, David Prelinger and Kylie Bogar won Best Overall Hack with TeleSpeech, a Chrome extension that converts Telegram messages into custom AI-generated speech. Second place went to HackAnalyzer, which uses artificial intelligence insights to help hackathon judges evaluate submissions.

The So You Think You Can Do It Better Prize went to giraffe.study, which uses artificial intelligence to quickly create high-quality videos explaining specific topics. Its design team was from Princeton University, Columbia University and Carleton College.

The Health & Fitness Prize went to GREENTRail, whose design team included students from Parsons School of Design in New York, Indiana University-Bloomington and Northeastern University. The mobile app helps hikers find trails based on a synthesized difficulty rating, while also using wildlife data to suggest routes less likely to affect local fauna.

WaterView won the Earth and Space Prize. Designed by Harvard student Gaurang Goel and several students from the University of Texas, WaterView provides real-time water quality tracking and predictive analysis.

A pair of University of Connecticut students teamed up to win the Efficiency Boosters category with SnipStudy. The program creates summaries of longform videos such as lectures, and with further development could even enable students to search for specific video segments based on keywords or phrases.

See original here:

Hacking the future - Harvard School of Engineering and Applied Sciences

1 Super Semiconductor Stock Down 30% You’ll Wish You’d Bought … – The Motley Fool

The development of popular artificial intelligence (AI) applications like OpenAI's ChatGPT wouldn't be possible without advanced semiconductors (chips). They are fitted inside data centers managed by cloud providers like Amazon, which are rented by AI developers to train their models.

Semiconductor giant Nvidia has an estimated 90% market share in the emerging industry for AI data center chips, but Advanced Micro Devices (AMD 1.52%) recently launched its own line of competing hardware.

AMD has struggled over the last 12 months because its core business involves selling chips for segments like gaming and personal computing -- precisely where consumers have cut spending amid challenging economic conditions. As a result, AMD stock trades 30% below its all-time high.

But the company just reported its financial results for the third quarter, and investors have plenty to look forward to, including billions of dollars in potential sales of its new AI chips in 2024. Here's why now is a great time for investors to buy the dip.

Image source: Getty Images.

AMD is one of the most renowned semiconductor companies in the world. Its chips can be found in a long list of leading consumer electronics, including high-end computers, top gaming consoles like Sony's PlayStation 5 and Microsoft's Xbox, and even the infotainment systems in Tesla's electric vehicles.

But there's a greater opportunity on the horizon for AMD. Nvidia CEO Jensen Huang says $1 trillion worth of existing data center infrastructure needs upgrading to support accelerated computing and AI, and while his company currently has almost the entire market to itself, AMD is focused on snatching some market share.

Earlier this year, AMD revealed its new MI300 lineup of data center chips designed for AI workloads. The company says its MI300A variant combines central processing units (CPUs) with graphics processing units (GPUs) to create the world's first accelerated processing unit (APU) for data centers. AMD began shipping the MI300A to the Lawrence Livermore National Laboratory last month to power the El Capitan supercomputer, which will be the most powerful on the planet when it comes online next year.

AMD says its data center GPU chips alone could exceed $2 billion in revenue in 2024, which would make the MI300 the fastest product to ramp to $1 billion in sales in the company's history.

But AMD is also tackling a new frontier, which is AI-enabled personal computers and devices. Its Ryzen AI lineup of chips is now available in 50 notebook designs with more coming thanks to collaborations with Microsoft, which already has a history of using AMD hardware in its Surface devices. Ryzen AI will allow computers to run AI workloads on-device, which can make them far more responsive than relying on cloud-based applications hosted by external data centers.

Through the first six months of 2023, AMD's revenue sank by 13.8% on a year-over-year basis. The decline was driven by a whopping 59% drop in its client segment, which includes the chips the company makes for third-party computers and devices.

But in the third quarter, AMD's revenue grew by a modest 4.2% year over year, to $5.8 billion -- but this time, it was actually led higher by the client segment, which saw a 42% revenue increase on its own. The result was driven by strong demand for the Ryzen 7000 series chips, which includes the AI variants I mentioned earlier.

Data center revenue remained flat, but with large-scale MI300 shipments on the horizon, that is very likely to change soon.

AMD anticipates $6.1 billion in total revenue in the current fourth quarter of 2023, which would mark an accelerated year-over-year growth rate of 9%. It's a great sign AMD's business is now on the upswing after a challenging 12-month period.

AMD's profitability has suffered lately. With revenue falling, AMD had to carefully manage its costs while still investing in the development of its new AI chips, which created a headwind at its bottom line. It resulted in the company's non-GAAP earnings per share sinking by 45% in the first half of 2023.

But the company's non-GAAP earnings returned to growth in Q3, increasing by 4% compared to the same quarter last year. Its trailing-12-month earnings per share now stands at $2.57, and based on a current stock price of $108.79, AMD is trading at a price-to-earnings (P/E) ratio of 42 at the moment.

That's significantly more expensive than the 29 P/E ratio of the Nasdaq-100 index, which is a good benchmark for the average valuation of large technology companies.

While that sounds unattractive, investors are faced with a very different picture when they look ahead to 2024. Wall Street analysts predict the company's earnings will come in at $3.93 for the year, which places AMD stock at a far more palatable forward P/E ratio of 27.7. There might even be upside to that earnings number if the uptake of AMD's AI chips occurs at a faster rate than analysts predict, or even if interest rates fall and the economy improves faster than expected.
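For readers who want to check the math, both valuation figures above follow directly from the share price and earnings numbers quoted. A quick sketch (the price and estimates are those cited in the article, not live market data):

```python
# P/E arithmetic using the figures quoted in the article.
price = 108.79    # AMD share price at the time of writing
ttm_eps = 2.57    # trailing-12-month non-GAAP earnings per share
fwd_eps = 3.93    # Wall Street consensus EPS estimate for 2024

trailing_pe = price / ttm_eps   # about 42
forward_pe = price / fwd_eps    # about 27.7

print(f"Trailing P/E: {trailing_pe:.1f}")
print(f"Forward P/E:  {forward_pe:.1f}")
```

The gap between the two ratios is the whole bull case in miniature: if the 2024 earnings estimate holds, the stock is much cheaper than the trailing multiple suggests.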

Therefore, AMD stock looks like a bargain right here at a 30% discount to its all-time high, especially for investors who can hold on for the next few years while AMD ramps up its AI sales.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Anthony Di Pizio has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Amazon, Microsoft, Nvidia, and Tesla. The Motley Fool has a disclosure policy.


1 Super Semiconductor Stock Down 30% You'll Wish You'd Bought ... - The Motley Fool

Like real climate action, AI’s perils will be ignored – The New Daily

Before issuing last week's Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, President Joe Biden went to Camp David and watched the new Tom Cruise movie, Mission: Impossible - Dead Reckoning Part One.

In the movie, Tom Cruise as Ethan Hunt battles an AI thingy called The Entity, which becomes sentient, takes over a submarine, kills everyone aboard and then threatens to use its super intelligence to control the world's militaries. Luckily there's a two-piece key that can turn off the Entity, which Mr Cruise manages to put together and, well, that seems to be left for Part Two.

White House deputy chief of staff Bruce Reed, who watched the movie with the President, told Associated Press in an interview: "If he hadn't already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about."

Really? Didn't he watch Terminator 2: Judgment Day, 32 years ago? Or Terminator 3: Rise of the Machines, 20 years ago?

Either of those movies would have given the President plenty to worry about long ago. But timing is everything in politics, and the time for worrying about AI is 2023, not 1991 or 2003. Then it was science fiction, now ChatGPT has made it real.

Five months ago, on May 30, 352 of the world's leading AI scientists and other notable figures (now it's up to 662) signed the following succinct Statement on AI Risk: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Extinction anxiety

A gulp went around the world; everyone looked up from their phones briefly at the word "extinction" and then went back to looking at their phones. Most of those who signed the statement warning about extinction went back to developing AI as fast as they could.

But it seems to have had quite an impact on the White House. Teams were set up to craft something, and it was decided that it should be a presidential executive order.

The President was alarmed by the evil, sentient Entity, probably not long after the movie came out in mid-June, and put a rocket up the AI policy teams. Last Monday, four months later, a gigantic 19,704-word executive order was emitted from the White House to deal with the risks of AI.

Will it do that? Well, it certainly is very long, and very prescriptive, so it might stunt AI's growth a bit. But regulation usually favours incumbents, so if nothing else it will probably help to entrench the technology oligopoly of the big six: Microsoft, Apple, Meta, Alphabet, Nvidia and Amazon.

A day after the executive order was issued, UK Prime Minister Rishi Sunak opened the two-day global AI Safety Summit at Bletchley Park in Buckinghamshire, the headquarters of Britain's code-breaking efforts in WWII.

Many of those who signed the Statement on AI Risk five months ago were there, along with a rare double from Australia: Deputy Prime Minister Richard Marles and Minister for Science Ed Husic. They all signed the Bletchley Declaration, affirming that AI should be "designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy and responsible", as the subsequent media release from Marles and Husic put it.

Perhaps the most telling sentence in the declaration was the last: "We look forward to meeting again in 2024." In other words, this first meeting won't achieve much apart from making a start.

After the meeting, Rishi Sunak had a live-streamed conversation with Elon Musk on the subject, in which Musk observed that the pace of AI development is "the fastest of any technology in history by far. It's developing at five-fold, ten-fold per year. Governments aren't used to moving at that speed."

The Musk perspective

He added that AI was "the most disruptive force in history", that "we will have for the first time something that is smarter than the smartest human", and "there will come a point where no job is needed. You can have a job if you want a job, for personal satisfaction, but AI will be able to do everything."

"I don't know if that makes people comfortable or uncomfortable," he said with a smirk.

Probably uncomfortable, Elon, although not as uncomfortable as the idea that they won't just be unemployed, they'll be extinct.

Musk didn't sign the May 30 Statement on AI Risk that talked about extinction. But those who did sign it, like OpenAI CEO Sam Altman, whose company develops ChatGPT, and Google DeepMind CEO Demis Hassabis, did not down tools just because what they were doing was an existential risk like a pandemic or nuclear war.

They ploughed on doggedly, heroically forging mankind's path into technology's next era. In September, OpenAI announced that ChatGPT can now "see, hear, and speak", Google has launched an AI feature in Gmail, and new AI entities are being launched every day, each one smarter than the one before and a little bit closer to being sentient. The industry is now talking about 2024 being the biggest year yet for AI.

It's all a bit reminiscent of the early warnings about the greenhouse effect of fossil fuels.

Climate crunch

My trusty AI assistant, Google Bard, tells me that in 1824, French physicist Joseph Fourier proposed that the atmosphere acts like a greenhouse, trapping heat from the sun and preventing it from escaping back into space, and in 1896, Swedish scientist Svante Arrhenius calculated that human emissions of carbon dioxide could lead to global warming.

Bard continued: "Scientists started getting really worried in the mid-20th century. In 1957, American scientist Roger Revelle published a paper in which he warned that human activities were increasing the level of carbon dioxide in the atmosphere, and that this could lead to significant global warming and catastrophe."

And in 1972, the National Academy of Sciences issued the equivalent of the May 30 Statement on AI Risk, concluding that human activities were likely to produce an increase in the average surface temperature of the Earth, although admittedly they didn't use the word "extinction".

The first global summit meeting about climate change, like the one in Bletchley Park last week, was held in Berlin in 1995, and two years later in Kyoto, a declaration was issued called the Kyoto Protocol in which everyone agreed to do something.

And here we are.

Alan Kohler writes twice a week for The New Daily. He is finance presenter on ABC News and founder of Eureka Report.
