
Will Microsoft and Artificial Intelligence Save the Market? – RealMoney

The market experienced a significant shift on Tuesday. There were several weak reports, such as the one from UPS (UPS), and a number of strong ones, like McDonald's (MCD) and General Motors (GM), but the market gapped lower at the open and trended down the rest of the day.

The selling was very broad and persistent, and the dip buyers that have been so active lately stood on the sidelines and watched.

It looked quite gloomy at the close, but Microsoft (MSFT) posted an extremely strong report, and Alphabet (GOOGL) announced a substantial buyback of shares. This action is producing a substantial bounce in the Nasdaq 100 (QQQ), which was trading up about 1% after dropping 1.9% on Tuesday.

There are still hundreds of earnings reports to come, including heavyweights like Meta (META), Amazon (AMZN), and Apple (AAPL), but will they help to shore up the broad damage that is occurring in other areas of the market like semiconductors (SMH) and financials (XLF)?

The problem is that the stellar reports from Microsoft and, to a lesser degree, Google are company-specific. Both companies are benefiting from a boom in artificial intelligence (AI). The growth there is even faster than what occurred during the internet bubble of the late 1990s, and Microsoft is the leader.

AI is going to benefit many companies in various ways, but it is not going to stop the economic cycle. The shift in market action on Tuesday was largely due to concerns about banks following the collapse of First Republic Bank (FRC) and growing concern about economic growth. A poor report from UPS and broad weakness in trucking indicated that the economy is slowing very quickly. A poor Philly Fed report and other economic news are also signs that things are slowing.

Another indication that a major shift is occurring is that bonds rallied sharply as equities fell and money flowed into safe plays like soft drinks and pharmaceuticals. There was a major rotation out of the stocks that are most likely to suffer from a recession, such as small-caps (IWM) and chips, and into the safety of bonds and drugs.

We have a slew of earnings ready to hit, and we will see how far Microsoft can lead the market, but the danger lies in the thousands of smaller stocks whose weakness Google and Microsoft can offset only to some extent.

(UPS, MSFT, GOOGL, AMZN and AAPL are holdings in the Action Alerts PLUS member club. Want to be alerted before AAP buys or sells these stocks? Learn more now.)



SPO Global Inc. Adopts Initiative to Implement Artificial Intelligence … – AccessWire

SHANDONG, CHINA / ACCESSWIRE / April 26, 2023 / SPO Global Inc. (OTC PINK:SPOM) ("SPO Global Inc" or the "Company") operates Shandong Fangyuan Huizhong Intelligent Equipment Co., Ltd. (SFHI).

Artificial Intelligence (AI) is revolutionizing the food processing industry, changing the way food is produced, processed, and packaged. Shandong Fangyuan Huizhong Intelligent Equipment Co., Ltd. is integrating the power of AI into its food processing machines. Adding AI to existing machines will unlock new levels of efficiency, precision, and quality in food production.

One takeaway from the sugar and liquor commodities fair that the company attended last week is that food is increasingly being processed with machines powered by AI. These machines are capable of handling complex tasks with incredible speed and accuracy, leading to great improvements in productivity and profitability for food processing companies.

We are researching the addition of machine vision to our products. Machine vision uses AI-powered cameras and sensors that analyze visual data in real time, allowing food processing machines to accurately detect defects, sort and grade products, and optimize processing parameters. This helps reduce human error, minimize waste, and ensure consistent quality of food products. AI-powered machines can also identify and reject contaminated or spoiled food products, reducing the risk of foodborne illnesses and product recalls.

In addition, AI algorithms can analyze massive amounts of data, including temperature, humidity, and other environmental factors, to optimize processing conditions for different food products. Another major benefit is the ability to predict equipment failures, thereby minimizing downtime and increasing operational efficiency.

Another noteworthy advantage of using AI in food processing machines is the ability to automate repetitive tasks such as peeling, slicing, and packaging, which not only increases productivity but also reduces the risk of food contamination by lessening human contact.

Global demand for safe, high-quality, and sustainable food continues to rise, especially in the Asia Pacific region. Fang Qiang, the general manager of SFHI, said that he is excited to develop and implement the power of AI in the company's food processing machines and knows that its clients will benefit from increased productivity, product quality, and food safety, as well as reduced operational costs, giving them a competitive advantage in the market.

We encourage our shareholders to visit our corporate Twitter account for more updates: https://twitter.com/spo_global

About SPO Global Inc. (OTC PINK:SPOM): SPO Global Inc. recently signed a merger agreement with a leading food machinery company, Shandong Fangyuan Huizhong Intelligent Equipment Co., Ltd. (SFHI).

Company Disclaimers: As a publicly traded company, within the guidelines of federal and state securities law, SPO Global, Inc. may not avail itself of the Safe Harbor provisions as identified in the Private Securities Litigation Reform Act of 1995. However, SPO Global, Inc. provides the following disclaimer and warning to protect our shareholders, prospective investors and the public at large by alerting them to the risks and uncertainties involved with any investment, and the need to perform their own due diligence and assessment.

Forward-Looking Statements: This press release may contain "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995, such as statements relating to financial results and plans for future development activities and are thus prospective. Forward-looking statements include all statements that are not statements of historical fact regarding intent, belief or current expectations of the Company, its directors or its officers. Investors are cautioned that any such forward-looking statements are not guarantees of future performance and involve risks and uncertainties, many of which are beyond the Company's ability to control. Actual results may differ materially from those projected in the forward-looking statements. Among the factors that could cause actual results to differ materially from those indicated in the forward-looking statements are risks and uncertainties associated with the Company's business and finances in general, including the ability to continue and manage its growth, competition, global economic conditions and other factors discussed in detail in the Company's periodic filings with the Securities and Exchange Commission.

Company Contact:

Jeff Peng [email protected]

SOURCE: SPO Global, Inc.


Podcast | Artificial Intelligence will supercharge engineers rather … – New Civil Engineer

The rise and rise in the use of artificial intelligence (AI) in every part of our lives has led to questions about what it could mean for the way construction projects are planned, designed and delivered.

In this episode of The Engineers Collective, NCE editor Claire Smith is joined by NCE reporter Rob Hakimian as co-host as they speak to Dev Amratia, co-founder and CEO of nPlan, a machine learning company that uses AI to learn how completed construction projects performed in order to forecast outcomes on future projects. Dev also worked with the government to launch and deliver the national review on AI, which was published as part of the Industrial Strategy in 2017.

To set the scene for the conversation, Claire asked AI chatbot ChatGPT what Isambard Kingdom Brunel would have made of the use of AI in civil engineering and it responded in the form of a letter from Brunel.

While Dev said ChatGPT's assessment of AI's potential to advance construction was spot on, there was still much to discuss on the topic. During the conversation, Dev told Rob and Claire that AI is unlikely to replace engineers on projects; instead, it will supercharge them, allowing them to get on with the interesting parts of their work and leave the boring analysis to AI.

Dev also said that firms not engaging with AI will be left behind and gave advice for both individual engineers and firms on how to take their first steps with AI and prepare themselves for a future where AI is business as usual for the construction sector.

Listen now to hear Brunel's letter to our listeners on the impact of AI on construction and Dev's views and advice too.

The Engineers Collective is proving truly global in reach, with a third of listeners based outside the UK. It is also appealing to an inquisitive, career-builder demographic, with 80% of listeners under 35.

Special guests on previous episodes have included Crossrail managing director Mark Wild, HS2 Ltd special advisor Andrew McNaughton and ICE president Ed McCann. All are available for download and all address current and ongoing issues around skills and major project delivery.

The Engineers Collective is available via Apple Podcasts, Spotify, Acast, Stitcher, PodBean and via newcivilengineer.com/podcast

Like what you've read? To receive New Civil Engineer's daily and weekly newsletters, click here.


Artificial intelligence: 6 ways it improves decision-making – The Enterprisers Project

In recent years, advancements in artificial intelligence (AI) models have revolutionized business intelligence. Modern businesses are built on data-driven decisions, and to leverage the value contained in data without sacrificing human resources, more and more companies are bringing AI into their workflows.

By learning from continuous data input and mimicking human behavior, AI tools are extremely trainable, adaptive, and scalable. Various tools and solutions have emerged for this sole purpose, drawing on data about customers, employees, operations, finance, and more to help companies understand, process, interpret, and act on that data.

Here are 6 ways artificial intelligence can help you make those decisions.

[ Check out our primer on 10 key artificial intelligence terms for IT and business leaders: Cheat sheet: AI glossary. ]

AI algorithms can process and analyze large amounts of data in a relatively short time and thus can be trained or used to create tools for quick and efficient decision-making.

Instead of manually evaluating data, AI can quickly and accurately analyze and compare datasets for the desired output, saving businesses time and resources while helping them make more informed decisions. Tools like ChatGPT are already being employed in companies and for mainstream use to speed up processes like content and copywriting.

AI automation can perform routine tasks based on structured data, reducing time spent on administrative work and enabling employees and leadership to focus on more relevant decision-making.

[ Related reading: How artificial intelligence can inform decision-making. ]

When structured work is delegated to AI-automated workflows, end-to-end testing can be performed, and scheduling becomes an added benefit. This also avoids the risks of human error and fatigue.

AI also offers the advantage of learning and adjusting its output based on rules, actions, and triggers. Using AI tools is not just efficient but also scalable, as they can accommodate growing datasets and workflows.

AI models can make sense of large data sets and spot tendencies and nuances that may be difficult for humans to detect. They can therefore be trained to process information and quickly consider a wide range of variables and factors down to the most granular level possible, something that would take a lot of time and effort if done manually. AI tools have been used to help with tasks from forecasting for finance to anomaly detection in cybersecurity.
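As a concrete, deliberately simplified illustration of the anomaly-detection use case, the sketch below flags values that deviate sharply from the mean of a metric stream. This is an assumed example, not any particular vendor's tool; real AI-based detectors learn far richer baselines, but the underlying principle is the same.

```python
# Illustrative sketch: flag anomalies in a metric stream with a
# simple z-score rule.
def zscore_anomalies(values, threshold=3.0):
    """Return indices of values more than `threshold` std devs from the mean."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# Hourly login counts with one obvious spike at index 7.
logins = [12, 14, 11, 13, 12, 15, 13, 250, 12, 14]
print(zscore_anomalies(logins, threshold=2.0))  # → [7]
```

An AI-driven system would replace the fixed threshold with a model of what "normal" looks like, but the decision step — compare the observation to an expected baseline — is the same.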

Human judgment is flawed; even the most skilled specialist's choices and decisions may be skewed by unconscious biases, stress, and even factors like lack of sleep and hunger.

AI can help eliminate these issues by being less prone to cognitive biases and human error. It can also produce outputs that may seem unintuitive to humans, whose perceptions are shaped by subjective opinions and personal worldviews.

In areas like recruitment and HR, where objectivity is crucial but bias and profiling can occur, HR tools that include AI may help overcome human biases and assumptions when selecting candidates.

[ Also read: Generative AI: 3 do's and don'ts for IT leaders ]

AI models and algorithms are designed to systematically extract information from data patterns and can be used to forecast new patterns and interpretations. These forecasts can be translated into models and simulations to help users gain better insight into the estimated outcomes. These outcomes can be continually updated and refined as more data is fed to the algorithm.

Companies can then use this information to support decision-making by predicting the outcomes or providing clear recommendations for specific situations or datasets.

For example, AI is now being used to predict things like customer behavior. When trained on data from human behavior measures and methods like eye-tracking, some AI applications can now predict user behavior, such as attention, on creative assets.


AI can help businesses understand their target customers better. Tools that adapt to dynamic customer behavior and intention can help companies understand the customer journey and make better marketing decisions.

Applications like natural language processing help businesses understand how customers interact with different brands, tones, and copy. Additionally, AI customer feedback tools like chatbots and search bars can unveil a better understanding of customer needs and expectations.

AI tools are the future of business intelligence as more opportunities open up. A recent study by PwC found that 52% of companies have implemented AI adoption plans in the last year. That trend is only expected to continue with the recent conversation around topics like generative AI.

When leaders discover the right tools or models that cater to their business needs, they can tap into the immense potential of AI, empowering them to make informed, data-based choices that fuel innovation.

[ Want best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]


Recent FDA Discussion of Artificial Intelligence for Biosimilar Industry – JD Supra

Artificial Intelligence (AI) has long been associated with science fiction movies about dystopian futures, leading to fear among the general public about its potential impact. This is especially the case today for those in academia who have graded countless papers written by ChatGPT. However, the truth is far from what we see in the movies. In fact, one industry where AI is making significant progress is the biosimilar industry. AI offers many possibilities, including optimizing process design and process control, smart monitoring and maintenance, and trend monitoring to drive continuous improvement. Recently, the FDA has participated in discussions around AI and biotechnology.

The FDA has already played an important role in the integration of AI in the biotechnology field. It has authorized more than 500 AI/ML-enabled medical devices, but last month, the FDA made two big contributions to the conversation. The first is its publication of a discussion paper on artificial intelligence in drug manufacturing to help proactively prepare for the implementation of AI in the field.[1] The second is an article the FDA published disclosing the implementation of AI-based modeling to analyze protein aggregation in therapeutic protein drugs.[2]

1. FDA Discussion Paper: Artificial Intelligence in Drug Manufacturing

In its discussion paper, the FDA requests public feedback to help inform its evaluation of the existing regulatory framework involving AI in drug manufacturing. The FDA suggests a number of areas for consideration.

One such area is standards for developing and validating AI models. The FDA admits that there are limited industry standards and FDA guidance available for the development and validation of models that impact product quality. The lack of guidance is a concern since AI has such great applicability during drug manufacturing. AI can be used in applications to control manufacturing processes by adapting process parameters based on real-time data, or in conjunction with interrogation of in-process material or the final product to: (1) support analytical procedures for in-process or final product testing, (2) support real-time release testing, or (3) predict in-process product quality attributes.
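The process-control application mentioned above — adapting process parameters based on real-time data — can be sketched in miniature. The example below is a hypothetical illustration, not anything from the FDA paper: a fixed proportional rule stands in for the learned model an actual AI controller would use, and the "plant response" line is a toy simulation rather than real equipment.

```python
# Hypothetical sketch: adapt a process parameter based on real-time data.
def adjust_parameter(setting, measured, target, gain=0.5):
    """Nudge the control setting in proportion to the observed error."""
    error = target - measured
    return setting + gain * error

setting, measured = 50.0, 60.0   # e.g., heater setting, in-process temperature
for _ in range(5):
    setting = adjust_parameter(setting, measured, target=70.0)
    measured = 0.8 * measured + 0.2 * setting  # toy first-order plant response
print(round(measured, 1))  # drifts toward the 70.0 target
```

The regulatory questions the FDA raises apply even to a rule this simple: how the update rule was validated, and how to explain why a given input produced a given adjustment.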

The FDA also notes the challenge applicants face in defining standards that validate an AI-based model and in sustaining the ability to explain the model's output and its impact on product quality. As AI methods become more complex, it becomes more challenging to explain how changes in model inputs affect model outputs.

Another area for consideration is how continuously learning AI systems that adapt to real-time data may challenge regulatory assessment and oversight. AI models can evolve over time as new information becomes available. The FDA states that it may be challenging to determine when such an AI model can be considered an established condition of a process. It also may be challenging to determine the criteria for regulatory notification of changes to these models as a part of model maintenance over the product lifecycle. Applicants may need clarity on: (a) the expectations for verification of model lifecycle strategy, and (b) expectations for establishing product comparability after changes to manufacturing conditions introduced by the AI model.

Comments on these and other issues can be sent to the FDA at the link below.[3]

2. FDA's AI/Machine Learning Modeling to Ensure Safety and Demonstrate Biosimilarity

Despite its limited guidance for AI-based technologies, the FDA recently published a study using AI to characterize protein aggregation, which will provide a more effective means of demonstrating biosimilarity and improve the safety of therapeutic protein drugs.

One major challenge that biosimilar developers face with therapeutic protein drugs is characterizing these products in order to compare them with a reference product. Characterization is particularly an issue because of protein aggregates that can create subvisible particles with a wide variety of sizes, shapes, and compositions from a variety of stress conditions. Although a small fraction of the total protein, these aggregates may increase the risk of undesirable immune responses.

The FDA's study characterized aggregate protein particles using flow imaging microscopy (FIM). This imaging technique can record multiple images of a single subvisible particle from a single sample. Although these image sets are rich in structural information, manual extraction of that information is cumbersome and often subject to human error, meaning that most of it goes underutilized.

To overcome the shortcomings of current optical image analysis, the FDA applied convolutional neural networks (CNNs), a class of artificial neural networks proven helpful in many areas of image recognition and classification. This AI/ML approach enables automatic extraction of data-driven features (i.e., measurable characteristics or properties) encoded in images. These complex features (e.g., fingerprints specific to stressed proteins) can potentially be used to monitor the morphological features of particles in biotherapeutics, and enable tracking the consistency of particles in a drug product.

CNNs can be trained with input data using supervised learning or a fingerprinting approach. For supervised learning, the AI model is trained using estimations of the most discriminatory parameters defined using images that are correctly labelled as either stressed or unstressed. Once trained, the CNN can predict which pre-defined labels best apply to a new image. The fingerprinting approach, on the other hand, is optimized to reduce the dimension of the spatially correlated image pixel intensities, resulting in a new lower dimensional (e.g., 2D) representation of each image. These lower dimensional representations can be used to analyze complex morphology encoded in a heterogeneous collection of FIM images since the full images can readily be mapped to a lower dimensional representation by the CNN.
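To make the fingerprinting idea concrete, here is a toy sketch — assumed for illustration, not the FDA's code — that projects flattened synthetic "particle images" down to a 2-D representation in which similar morphologies land near each other. Plain principal component analysis stands in for the CNN's learned dimensionality reduction, and the image data is randomly generated.

```python
# Toy sketch of fingerprinting: map each image to a low-dimensional point.
import numpy as np

rng = np.random.default_rng(0)
# 40 synthetic 8x8 images (flattened to 64 pixels): two morphology groups
# with different mean intensities, standing in for stressed vs. unstressed.
stressed = rng.normal(0.7, 0.1, size=(20, 64))
unstressed = rng.normal(0.3, 0.1, size=(20, 64))
images = np.vstack([stressed, unstressed])

# PCA via eigendecomposition of the pixel covariance matrix.
centered = images - images.mean(axis=0)
cov = centered.T @ centered / (len(images) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
top2 = eigvecs[:, -2:]                   # two leading principal components
fingerprints = centered @ top2           # each image -> a 2-D fingerprint

print(fingerprints.shape)  # (40, 2)
```

The appeal of the lower-dimensional representation is exactly what the paragraph describes: once the mapping exists, a heterogeneous collection of FIM images can be compared as points in a small space rather than as raw pixel arrays.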

The FDA found that flow microscopy combined with CNN image analysis could be applied to a range of products and will provide potential new strategies for monitoring product quality attributes. Such technology will enable processing of large collections of images with high efficiency and accuracy by distinguishing complex textural features which are not readily delineated with existing image processing software.

* * *

As AI becomes more advanced and more of the biosimilar industry utilizes the technology, the FDA will have to provide more guidance, and the sooner the better. These two contributions indicate that the agency is well aware of this need and is even looking to promote AI's use across the pharmaceutical and biopharmaceutical fields.

[1] https://www.fda.gov/media/165743/download

[2] https://www.fda.gov/drugs/regulatory-science-action/artificial-intelligencemachine-learning-assisted-image-analysis-characterizing-biotherapeutics?utm_medium=email&utm_source=govdelivery

[3] https://www.regulations.gov/ Docket No. FDA-2023-N-0487


Artificial Intelligence is Advancing; ‘Future of Work’ Panel Discusses … – University of Nebraska Omaha

The second iteration of UNO's Future of Work Symposium Series focused on the rise of chatbots and artificial intelligence (AI), their growing role in society and the workplace, and the opportunities and threats facing the use of AI and automation. Hundreds gathered in the John and Jan Christensen Concert Hall inside the Strauss Performing Arts Center on Friday to hear what leading experts and professionals are taking into consideration when implementing and managing AI in their workplaces.

Michelle Trawick, Ph.D., dean of UNO's College of Business Administration, welcomed Arun Rai, Ph.D., professor, director, and co-founder of the Robinson College of Business Center for Digital Innovation at the University System of Georgia, as the keynote speaker for the event.

Rai spoke to how artificial intelligence can impact the workforce through automation, or displacing human skills; augmentation, or using AI to complement human skills; and creation, or developing new human skills and jobs that utilize AI.

In his remarks, Rai also discussed the importance of transparency, fairness, and ethical uses of AI. One of the emerging AI chatbot platforms, ChatGPT, created by OpenAI, now utilizes 1 trillion parameters as part of its learning to operate. These parameters can encode useful information that guides the algorithms, but also disinformation and biased or discriminatory content. Algorithms use this information to make predictions based on data when building responses, using probabilities to determine what should come next.
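That last idea — using probabilities to determine what comes next — can be sketched in a few lines. The vocabulary and probabilities below are made up for illustration; a real model computes a distribution over tens of thousands of tokens at every step.

```python
# Made-up illustration of next-word prediction: sample the next word
# from a toy conditional distribution.
import random

# Toy probabilities for the word following "artificial".
next_word_probs = {"intelligence": 0.90, "sweetener": 0.07, "turf": 0.03}

def pick_next_word(probs, rng):
    """Sample one word according to its probability."""
    words, weights = zip(*probs.items())
    return rng.choices(words, weights=weights, k=1)[0]

print(pick_next_word(next_word_probs, random.Random(42)))
```

With weights this lopsided, the sample is almost always "intelligence" — which is also how biased training data shows up in outputs: whatever the parameters make probable is what gets generated.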

All of this leads not only to workforce needs, but to opportunities for companies and organizations. Utilizing AI requires adapting to meet new needs. "We are at this point in research where we're looking at AI exposure in the industry," Rai said. "We're looking at AI for different occupations and jobs, but distilling it down to skills, and these models fundamentally need to be dynamic. Because AI is not stagnant, labor markets are not stagnant."

Rai pointed out two key aspects that became recurring themes in his remarks and in the following panel discussion.

First, AI does not always have to replace; it can be used as a tool to work smarter and reduce disparities. Currently, the largest tech companies are the biggest producers of AI content and workflows, but the broad availability of AI platforms enables more people at lower skill levels to utilize AI in their own occupations and workplaces. This point essentially boils down to the importance of adapting to new technology. A quote shared by Rai stated: "AI will not replace managers, but managers that use AI will replace those that do not."

Second, the true potential of implementing AI in the future of the workplace lies at the intersection of AI and other fields. Lawyers may use AI to synthesize massive amounts of legal data. Legislators and decision makers can use AI to influence public policy. The possibilities are truly endless.

Following Rai's remarks, a panel of researchers and leaders from area businesses and organizations took to the stage for a Q&A session featuring questions from the audience. The panel was moderated by Shonna Dorsey, executive director of the Nebraska Tech Collaborative, and included:

Panelists spoke to how the emergence of automation and artificial intelligence would directly impact their industries, and how their fields have managed the introduction of previously disruptive technologies. Audience members could also answer the questions by scanning a QR code and providing their own responses.

Fernandez said he has seen an exponential increase in electricity use in recent years as a result of artificial intelligence and automation, jumping from four megawatts per year to 100 megawatts per year. He expects that total to double by the end of the decade.

"The majority of that growth is coming to us because of the data centers, whether they are big data centers around the region or data in servers or people's computers," he said. "Data is altering our industry because we have to power the AI, and that AI doesn't work without electricity."

Brown said the field of logistics has been transformed by advances in automation.

"If you think of a logistics operator or a logistics manager from 20 years ago to now, that person was probably sitting with their team and spending a majority of their time trying to find the right data so they could make one or two decisions per day," she said. "Today, that same team is spending very little time on finding the data because all that data is readily available to them, and they're rapidly making decisions."

True to the forum's name, the panelists discussed what AI and automation mean for their workforce. Could these evolving technologies exacerbate a workforce crisis in which there aren't nearly enough positions to go around? Or would these advances lead to changes in roles, or to new positions that previously never existed?

Elson, who leads information science and technology research initiatives for NCITE, said that while there are many workforce benefits that come with AI, there are just as many risks to consider.

"This is leading to some potential concerns around the novelty of new attacks: attack types that we've never conceived of and are having difficulty anticipating, and the essential need to train individuals at the entry level," he said.

Although the technologies are powerful and impressive on their own, Murphy said, they will only be as impactful as the people who use them allow them to be.

"If we don't recognize that human nature will control how we use it, we're not going to adapt. We're not going to harness it. We're not going to profit from it," he said.

The Future of Work Symposium Series at UNO began in fall 2022 as a series of ongoing conversations about critical topics influencing how, why, and where we work. Through conversations with leaders from the public, private, nonprofit, and education sectors, the series will continue to shed light on big challenges facing the workplace and share thought-provoking insights on the future of the workforce.

Information about upcoming events in the Future of Work Symposium Series will be published on the UNO website as it becomes available.


AI Boom: 2 Artificial Intelligence Stocks Billionaire Investors Are … – The Motley Fool

The concept of artificial intelligence (AI) traces back to the 1930s and the work of noted mathematician and computer scientist Alan Turing, who also laid the theoretical groundwork for the modern computer. However, recent advances in the field of generative AI have made headlines, and people are captivated. The capabilities exhibited by OpenAI's ChatGPT went far beyond anything most could have imagined, sparking a wave of public interest -- but this could be just the beginning.

Cathie Wood and her team at Ark Investment Management have been running the numbers and concluded that AI software could represent a $14 trillion revenue opportunity by the close of the decade.

Some of the most successful hedge fund billionaires are looking for a way to capitalize on the fervor, buying up shares of companies best positioned to profit from the growing integration of AI into every facet of our lives. Here are two AI-related stocks billionaires are buying hand over fist.


Billionaire philanthropist and hedge fund manager Seth Klarman is something of a value investing legend, called "the most successful and influential investor you have probably never heard of" by The New York Times. Klarman authored the book Margin of Safety: Risk-Averse Value Investing Strategies for the Thoughtful Investor, which sold just 5,000 copies and has long since been out of print. However, those looking to own it often pay hundreds or even thousands of dollars for the privilege of buying a used copy.

Klarman's Baupost Group, with more than $30 billion in assets under management, recently made a big bet on e-commerce titan Amazon (AMZN 1.98%). In the fourth quarter, the hedge fund bought 742,000 shares, increasing its stake by 299%, bringing its total holdings to 990,000 shares, currently worth roughly $104 million.
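The stake arithmetic reported above is easy to verify: a 742,000-share purchase that brings the position to 990,000 shares implies the prior stake and percentage increase below.

```python
# Quick check of the reported stake arithmetic.
prior = 990_000 - 742_000            # shares held before the purchase
increase_pct = 742_000 / prior * 100
print(prior, round(increase_pct))    # 248000 299
```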

The AI connection is clear. In its 2022 letter to shareholders, CEO Andy Jassy noted that Amazon has been using machine learning (a type of AI) "extensively for 25 years ... in everything from personalized e-commerce recommendations to fulfillment center pick paths, to drones for Prime Air, to Alexa, to the many machine learning services AWS [Amazon Web Services] offers." He went on to say that Amazon plans to continue to "invest substantially" in AI, saying it will "transform and improve virtually every customer experience."

Amazon also announced the debut of its own generative AI service -- dubbed Bedrock -- to AWS cloud customers. Users will be able to access the company's Titan large language model (LLM) -- similar to the technology that powers ChatGPT -- and customize it based on their needs. In a recent interview with CNBC's Squawk Box, Jassy noted that "really good" LLMs cost "billions of dollars" and take "many years" to train, and most companies simply don't have the resources.

To be clear, it probably wasn't the AI connection that caught Klarman's eye, but rather Amazon's strong business and historically low valuation. The stock currently trades for roughly 2 times sales, squarely in the range of a bargain price-to-sales ratio of between 1 and 2. Plus, the last time Amazon stock was this cheap was nearly eight years ago, in early 2015.
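The price-to-sales ratio mentioned above is simply market capitalization divided by trailing-twelve-month revenue. A minimal sketch, using approximate early-2023 figures for Amazon that are illustrative stand-ins and not quoted from the article:

```python
# Illustrative only: approximate figures, not taken from the article.
# Price-to-sales = market capitalization / trailing revenue.
market_cap = 1.05e12        # ~$1.05 trillion market cap
ttm_revenue = 5.14e11       # ~$514 billion trailing-twelve-month sales

ps_ratio = market_cap / ttm_revenue
print(round(ps_ratio, 2))   # 2.04, i.e. "roughly 2 times sales"
```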

Many investors are familiar with the name Ken Griffin, billionaire founder and CEO of hedge fund Citadel Advisors. He had already achieved legendary status on Wall Street for predicting the 1987 market crash, but recently added a new accomplishment to his resume: In 2022, Citadel became the most profitable hedge fund in history, producing $16 billion in gains.

Citadel Advisors bet heavily on creative kingpin Adobe (ADBE) in the fourth quarter, snapping up 802,267 shares, an increase of 96%. That brings the total to more than 1.57 million shares, currently worth more than $598 million.

Adobe was early to jump on the AI bandwagon, introducing its Sensei platform in 2016. The system provided a set of AI tools to creators, helping them search for and identify images, alter digital facial expressions, or categorize their creations, among many other features.

That long-term tradition continues with the recent debut of Firefly, a generative AI editing tool for creators. The suite of AI models will initially focus on "the generation of images and text effects," and will be integrated across Adobe's Creative Cloud, Document Cloud, Experience Cloud, and Adobe Express, according to the press release. The use of text-based prompts will enhance and accelerate the creative process, while cutting down on the amount of time spent on menial but necessary creative tasks. These tools are already available in beta, and Adobe will be soliciting feedback from users to help improve their utility.

It also doesn't hurt that despite the stock's 750% gains over the past decade, Adobe is currently selling for roughly 9 times sales. For context, the valuation hasn't been that cheap since early 2017, which likely factored into Griffin's decision.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Danny Vena has positions in Adobe and Amazon.com. The Motley Fool has positions in and recommends Adobe, Amazon.com, and New York Times. The Motley Fool recommends the following options: long January 2024 $420 calls on Adobe, short April 2023 $38 calls on New York Times, and short January 2024 $430 calls on Adobe. The Motley Fool has a disclosure policy.

Continue reading here:
AI Boom: 2 Artificial Intelligence Stocks Billionaire Investors Are ... - The Motley Fool

Read More..

Landmark Supreme Court case could have ‘far reaching implications’ for artificial intelligence, experts say – Fox News

An impending Supreme Court ruling focusing on whether legal protections given to Big Tech extend to their algorithms and recommendation features could have significant implications for future cases surrounding artificial intelligence, according to experts.

In late February, the Supreme Court heard oral arguments examining the extent of legal immunity given to tech companies that allow third-party users to publish content on their platforms.

One of two cases, Gonzalez v. Google, focuses on recommendations and algorithms used by sites like YouTube, allowing accounts to arrange and promote content to users.


Section 230, which allows online platforms significant leeway regarding responsibility for users' speech, has been challenged multiple times in the Supreme Court.

Nohemi Gonzalez, a 23-year-old U.S. citizen studying abroad in France, was killed by ISIS terrorists who fired into a crowded bistro in Paris in 2015. Her family filed suit against Google, arguing that YouTube, which Google owns, aided and abetted the ISIS terrorists by allowing and promoting ISIS material on the platform with algorithms that helped to recruit ISIS radicals.

Marcus Fernandez, an attorney and co-owner of KFB Law, said the outcome of the case could have "far-reaching implications" for tech companies, noting it remains to be seen whether the decision will establish new legal protections for content or if it will open up more avenues for lawsuits against tech companies.

He added that it is important to remember that the ruling could determine the level of protection given to companies and how courts could interpret such protections when it comes to AI-generated content and algorithmic recommendations.

"The decision is likely to be a landmark one, as it will help define what kind of legal liability companies can expect when they use algorithms to target their users with recommendations, as well as what kind of content and recommendations are protected. In addition to this, it will also set precedent for how courts deal with AI-generated content," he said.

According to Section 230 of the Communications Decency Act, tech companies are immune to lawsuits based on content curated or posted by platform users. Much of the discussion from the justices in February waded into whether the posted content was a form of free speech and questioned the extent to which recommendations or algorithms played a role in promoting the content.



At one point, the plaintiff's attorney, Eric Schnapper, detailed how YouTube presents thumbnail images and links to various online videos. He argued that while users create the content itself, the thumbnails and links are joint creations of the user and YouTube, thereby exceeding the scope of YouTube's legal protections.

Google attorney Lisa Blatt said the argument was inadmissible because it was not a part of the plaintiff's original complaint filed to the court.

Justice Sonia Sotomayor expressed concern that such a perspective would create a "world of lawsuits." Throughout the proceedings, she remained skeptical that a tech company should be liable for such speech.

Attorney Joshua Lastine, the owner of Lastine Entertainment Law, told Fox News Digital he would be "very surprised" if the justices found some "nexus" between what the algorithms generate and push onto users and other types of online harm, such as somebody telling another person to commit suicide. Absent such a nexus, he said, he does not believe a tech company would face legal repercussions.

Lastine, citing the story of the Hulu drama "The Girl From Plainville," said it is already extremely difficult to establish one-on-one liability and bringing in a third party, like a social media site or tech company, would only increase the difficulty of winning a case.

In 2014, Michelle Carter fell under the national spotlight after it was discovered that she sent text messages to her boyfriend, Conrad Roy III, urging him to kill himself. Though she was charged with involuntary manslaughter and faced up to 20 years in prison, Carter was only sentenced to 15 months behind bars.



"It was hard enough to find the girl who was sending the text messages liable, let alone the cell phone that was sending those messages," Lastine said. "Once algorithms and computers start telling people to start inflicting harm on other humans, we have bigger problems when machines start doing that."

Ari Lightman, a Distinguished Service Professor at the Carnegie Mellon Heinz College of Information Systems and Policy, told Fox News Digital that a change to Section 230 could open a "Pandora's box" of litigation against tech companies.

"If this opens up the floodgate of lawsuits for people to start suing all of these platforms for harms that have been perpetrated as they perceive toward themthat could really stifle down innovation considerably," he said.

However, Lightman also said the case reaffirmed the importance of consumer protection, noting that if digital platforms can recommend things to users with immunity, they need to design more accurate, usable, and safer products.

Lightman added that what constitutes harm in a particular case against a tech company is very subjective; for example, an AI chatbot making someone wait too long or giving erroneous information. According to Lightman, a standard in which lawyers attempt to tie harm to a platform could be "very problematic," leading to a sort of "open season" for lawyers.

"It's going to be litigated and debated for a long period of time," Lightman said.


Lightman noted that AI has many legal issues associated with it, not just liability and erroneous information but also IP issues specific to the content. He said that greater transparency about where a model acquired its data and why it presented that data, along with the ability to audit the model, would be an important mechanism for arguing against tech companies' immunity from grievances filed by users unhappy with an AI's output.

Throughout the oral arguments for the case, Schnapper reaffirmed his stance that YouTube's algorithm, which helps to present content to users, is in and of itself a form of speech on the part of YouTube and should therefore be considered separately from content posted by a third party.

Blatt claimed the company was not responsible because all search engines leverage user information to present results. For example, she noted that someone searching for "football" would be provided different results depending on whether they were in the U.S. or somewhere in Europe.

U.S. Deputy Solicitor General Malcolm Stewart compared the conundrum to a hypothetical situation where a bookstore clerk directs a customer to a specific table where a book is located. In this case, Stewart claimed the clerk's suggestion would be speech about the book and would be separate from any speech contained inside the book.


The justices are expected to rule on the case by the end of June to determine whether YouTube could be sued over its algorithms used to push video recommendations.

Fox News' Brianna Herlihy contributed to this report.

Read this article:
Landmark Supreme Court case could have 'far reaching implications' for artificial intelligence, experts say - Fox News


As artificial intelligence improves, so does concern: What it could mean for Wisconsin – WeAreGreenBay.com

(WFRV) Imagine you are a college student assigned to write an essay. Coming up with a thesis, finding evidence, and ultimately putting it all together is time-consuming, but with the help of artificial intelligence services like ChatGPT, an essay can be written in seconds.

ChatGPT is an artificial intelligence chatbot trained on text from across the internet, designed to give users the most accurate answers it can. It can also generate original works of writing, from essays to movie scripts.

ChatGPT is just one form of AI; companies like Google, Microsoft, and Snapchat have their own versions too.

As you can imagine with any new technology, there are pros and cons that must be addressed. UWGB Provost Kate Burns explains how the university plans to tackle ChatGPT.

"We've been having monthly workshops so that faculty can better understand: What does it mean? What does that look like within their classroom? We have policies already in terms of academic honesty within the classroom, when we see plagiarism, how we handle that, but we are really looking to see how can we use it as a tool," Burns says.

One instructor using it as a tool is Kristopher Purzycki, an assistant professor of English and humanities at the university. As part of his writing courses, Purzycki shows his students how ChatGPT can be used as a template to begin their essays.

Purzycki says, "It does provide a good foundation for writing structure. If we're trying to have students write, for example, a cover letter that shows off their personality, this is a piece of writing that completely erases that, so it does get them a sense of how they can personalize their writing."

While AI may provide benefits in the classroom, it could also contribute to the spread of misinformation on a global scale.

Tinglong Dai, an operations management professor at Johns Hopkins University, says, "ChatGPT is just one of many generative AI technologies. Other technologies can produce videos, images, illustrations, or even send tweets and Facebook messages automatically. This can easily become a threat to our democracy."

He also says that people do not need to speak English to use AI technologies.

"We have a lot of concerns about rushing misinformation. Any country can unleash massive amounts of information with ChatGPT. They don't have to be very good at English. All they need is to ask a question, and ChatGPT really opens that language barrier and makes it insanely cheap to produce misinformation," Dai explains.

Knowing the technology is far from perfect, and at times even dangerous, experts say that's where our old reliable human brains come into play.

Dai says, "Eventually, we'll have to turn to humans to solve the reliability issues and trust issues, the bias issue, all sorts of harmful misinformation."

Purzycki also believes it is important to recognize this technology is not slowing down, so educators should learn to embrace it.

"I don't think it's worth our time and energy to try to stop it. I think it's a great opportunity, but I do think we need to work through some of the big questions that we have."

AI might not be something you can avoid, but just remember to verify everything you see and read.

*Disclaimer: this article was not written using ChatGPT or any other AI software.

See the original post here:
As artificial intelligence improves, so does concern: What it could mean for Wisconsin - WeAreGreenBay.com


Artificial Intelligence May Change the SOC Forever – BankInfoSecurity.com


ChatGPT is "amazing" and "has reformed the way we interact with computing," said Nikesh Arora, chairman and CEO of Palo Alto Networks.


Yes, he said, it can be used to create malware, but that malware is blockable because it was created from recursive models. And generative AI can be used to produce phishing attacks at scale, he warned, but we can "fight AI with AI." Arora said the value in generative AI comes from taking what's useful about it and applying that to the security operations center (SOC).

"The only way security is going to get done right is if you pay attention to data - to what the data is telling you," he said. You can use machine learning to understand patterns, find anomalous behavior and stop it - to "fight bad actors with automation and data analytics and ML," he said.

In this video interview with Information Security Media Group at RSA Conference 2023, Arora also discusses:

Prior to Palo Alto Networks, Arora held a number of positions at Google, including senior vice president and chief business officer, and president of global sales operations and business development. Before that, he was the chief marketing officer for the T-Mobile International Division of Deutsche Telekom.

More:
Artificial Intelligence May Change the SOC Forever - BankInfoSecurity.com
