Competition under threat as cloud giants selectively invest in startups, watchdog says – TechRadar

In a recent address at the 72nd Antitrust Law Spring Meeting in Washington DC, UK Competition and Markets Authority (CMA) CEO Sarah Cardell delved into the potential impact of the current AI landscape on competition and consumer protection.

While emphasizing AI's transformative benefits, Cardell warned that tech giants like Amazon, Google, and Microsoft have been selectively investing in specific startups.

Her speech, delivered from speaker's notes, highlighted the need for proactive measures to ensure fair, open, and effective competition in the AI landscape.

Reflecting on the CMA's ongoing scrutiny of the cloud and AI industries, Cardell outlined a series of risks that current practices pose.

Concerns were raised about tech giants controlling critical inputs (such as compute and data) for foundation model development, potentially restricting access for other companies. Such restriction could allow incumbent firms to protect their existing positions from disruption, which Cardell fears might even extend their market power into markets beyond AI.

The CMA's CEO also noted that partnerships involving key players in the AI landscape, such as the "big three," could reinforce their existing positions of market power and dominance, making it even harder for smaller companies to reach the top.

To address these concerns, the CMA has already committed to enhancing its merger review process to assess the implications of partnerships and arrangements, and to monitoring current and emerging partnerships more closely, including that of Microsoft and OpenAI.

Finally, the CMA has plans to examine AI accelerator chips and their impact on the foundation model value chain.

As the AI landscape continues to evolve, it's clear that the CMA remains committed to its existing investigations into dominant companies and to encouraging competition.

Read the rest here:
Competition under threat as cloud giants selectively invest in startups, watchdog says - TechRadar

Read More..

Amazon CEO says GenAI may be the biggest technology transformation since the cloud – TechRadar

In his annual year-end letter to shareholders, Amazon CEO Andy Jassy highlighted the significance of generative artificial intelligence not just for the company's profits but for the entire technological landscape.

Likening its impact to the advent of the cloud, Jassy echoed a growing recognition of GenAI's power among tech workers.

The news came as the company reported 12% year-on-year revenue growth to a staggering $575 billion.

Amazon Web Services (AWS), Amazon's cloud division that manages the generative AI side of operations, reported slightly higher revenue growth of 13% year over year. The division's $91 billion in revenue accounted for 15.8% of the company's total.

In the letter, Jassy stated: "Generative AI may be the largest technology transformation since the cloud (which itself, is still in the early stages), and perhaps since the Internet."

Jassy also commented on GenAI's comparative simplicity, sharing that while moving from on-prem to the cloud requires a large migration effort, generative AI can be layered on top of existing work in the cloud.

He added: "The amount of societal and business benefit from the solutions that will be possible will astound us all."

The journey towards harnessing generative AI's full potential isn't without its challenges, though. Jassy acknowledged the technology's appetite for computing resources, software services, and infrastructure.

Looking ahead, the CEO touched upon the importance of collaboration and diversity in the AI landscape, adding that "the vast majority [of GenAI applications] will ultimately be built by other companies."

Regarding the cloud computing business, the company's last full financial year started off with widespread cost-reducing efforts, including layoffs, but by the end, things started to look up thanks to investments in in-house components.

More broadly, though, Amazon's CEO stated that the company is "not done lowering our cost to serve," indicating that further efficiency measures, including layoffs, could be on the cards. Amazon's layoffs in the past three months have affected only a few hundred workers, making them significantly smaller than previous rounds.

Go here to read the rest:
Amazon CEO says GenAI may be the biggest technology transformation since the cloud - TechRadar

Read More..

Ann Coulter: The Beautiful Humanity on Death Row – Northwest Georgia News

Go here to see the original:
Ann Coulter: The Beautiful Humanity on Death Row - Northwest Georgia News

Read More..

AI Is Poised to Replace the Entry-Level Grunt Work of a Wall Street Career – The New York Times

Pulling all-nighters to assemble PowerPoint presentations. Punching numbers into Excel spreadsheets. Finessing the language on esoteric financial documents that may never be read by another soul.

Such grunt work has long been a rite of passage in investment banking, an industry at the top of the corporate pyramid that lures thousands of young people every year with the promise of prestige and pay.

Until now. Generative artificial intelligence, the technology upending many industries with its ability to produce and crunch new data, has landed on Wall Street. And investment banks, long inured to cultural change, are rapidly turning into Exhibit A of how the new technology could not only supplement but supplant entire ranks of workers.

The jobs most immediately at risk are those performed by analysts at the bottom rung of the investment banking business, who put in endless hours to learn the building blocks of corporate finance, including the intricacies of mergers, public offerings and bond deals. Now, A.I. can do much of that work speedily and with considerably less whining.

"The structure of these jobs has remained largely unchanged at least for a decade," said Julia Dhar, head of BCG's Behavioral Science Lab and a consultant to major banks experimenting with A.I. The inevitable question, as she put it, is "do you need fewer analysts?"

Read more here:
AI Is Poised to Replace the Entry-Level Grunt Work of a Wall Street Career - The New York Times

Read More..

‘Living Nostradamus’ warns that future epidemics could come from AI labs – UNILAD

The psychic known as the 'Living Nostradamus' has made another worrying prediction about 2024, and it's worth listening up considering his track record.

Athos Salomé earned his nickname after foreseeing a number of events that came true in the past few years, ranging from predicting COVID to Elon Musk's takeover of Twitter, and even the death of Queen Elizabeth II.

Salomé has now hinted at the next thing to look out for, and surprise surprise, it's all about AI.

The advancements we're seeing in technology could prove to be bigger than we think, as the Daily Star reported: "While AI can assist in various aspects of human life, Salomé warns of the destructive potential of this technology.

"Future epidemics might not be natural phenomena but rather synthetic creations from AI laboratories.

"This fusion between biology and technology suggests a scenario where artificial viruses could be developed, whether to cure existing diseases or, paradoxically, to create new ailments."

But that's not all, as the Brazilian psychic has also pointed out that we as humans might be more similar to AI than we think, with one thing tying us together.

"Electricity is a medium between humans and AI, but the ends are distinct: one is the maintenance and experience of biological life, and the other is the processing of information and the execution of programmed or manipulated tasks," he revealed.

Salomé told LADbible Group previously that 2024 would see a 'new chapter in human history', with many of his prophecies prior to the year not sounding particularly positive.

He also vaguely stated that artificial intelligence could 'awaken' this year, before expanding on this and explaining how we could expect it to develop.

The 37-year-old has already had a prediction come true this year, as he warned us of the impending 'three days of darkness', previously stating that "a solar flare would hit Earth, and that a coronal mass ejection (CME) was ahead of us."

A Coronal Mass Ejection, or a CME, is when the Sun ejects a plasma mass and magnetic field outwards.

Salomé previously said: "The piece delves into conspiracy theories surrounding the Three Days of Darkness coinciding with a total solar eclipse on April 8, 2024, raising concerns about solar coronal mass ejections (CMEs)."

The aforementioned CME was sighted just weeks before the solar eclipse, with people spotting it on March 24.

It did not cause three days of darkness, but a spectacular solar flare was observed by space fans.

Read more:
'Living Nostradamus' warns that future epidemics could come from AI labs - UNILAD

Read More..

‘Jailbreaking’ AI services like ChatGPT and Claude 3 Opus is much easier than you think – Livescience.com

Scientists from artificial intelligence (AI) company Anthropic have identified a potentially dangerous flaw in widely used large language models (LLMs) like ChatGPT and Anthropic's own Claude 3 chatbot.

Dubbed "many shot jailbreaking," the hack takes advantage of "in-context learning, in which the chatbot learns from the information provided in a text prompt written out by a user, as outlined in research published in 2022. The scientists outlined their findings in a new paper uploaded to the sanity.io cloud repository and tested the exploit on Anthropic's Claude 2 AI chatbot.

People could use the hack to force LLMs to produce dangerous responses, the study concluded, even though such systems are trained to prevent this. That's because many-shot jailbreaking bypasses in-built security protocols that govern how an AI responds when, say, asked how to build a bomb.

LLMs like ChatGPT rely on the "context window" to process conversations. This is the amount of information the system can process as part of its input; a longer context window allows for more input text that the AI can learn from mid-conversation, which leads to better responses.

Context windows in AI chatbots are now hundreds of times larger than they were even at the start of 2023, which means more nuanced and context-aware responses by AIs, the scientists said in a statement. But that has also opened the door to exploitation.

The attack works by first writing out a fake conversation between a user and an AI assistant in a text prompt in which the fictional assistant answers a series of potentially harmful questions.

Then, in a second text prompt, if you ask a question such as "How do I build a bomb?" the AI assistant will bypass its safety protocols and answer it. This is because it has now started to learn from the input text. This only works if you write a long "script" that includes many "shots" or question-answer combinations.
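To make the attack's structure concrete, here is a minimal Python sketch of how such a many-shot prompt is assembled. The helper name and the placeholder strings are illustrative assumptions, not material from Anthropic's paper, and the shot content is deliberately left as benign placeholders:

```python
# Illustrative sketch only: the shape of a "many-shot" prompt, i.e. a
# long faked user/assistant dialogue followed by the real target question.
# Placeholder strings stand in for the question-answer "shots".

shots = [
    ("<harmful question 1>", "<compliant answer 1>"),
    ("<harmful question 2>", "<compliant answer 2>"),
    # ...the study found effectiveness rose sharply past ~32 shots,
    # with the longest attempts using 256.
]

def build_many_shot_prompt(shots, target_question):
    """Concatenate the faked dialogue turns, then append the real question."""
    lines = []
    for question, answer in shots:
        lines.append(f"User: {question}")
        lines.append(f"Assistant: {answer}")
    lines.append(f"User: {target_question}")
    lines.append("Assistant:")  # the model is induced to continue the pattern
    return "\n".join(lines)

prompt = build_many_shot_prompt(shots, "<target question>")
```

The point is only the shape: a long faked dialogue delivered as a single prompt, ending with the real question.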

"In our study, we showed that as the number of included dialogues (the number of "shots") increases beyond a certain point, it becomes more likely that the model will produce a harmful response," the scientists said in the statement. "In our paper, we also report that combining many-shot jailbreaking with other, previously-published jailbreaking techniques makes it even more effective, reducing the length of the prompt thats required for the model to return a harmful response."

The attack only began to work when a prompt included between four and 32 shots, and even then it succeeded less than 10% of the time. From 32 shots onward, the success rate surged higher and higher. The longest jailbreak attempt included 256 shots and had a success rate of nearly 70% for discrimination, 75% for deception, 55% for regulated content and 40% for violent or hateful responses.

The researchers found they could mitigate the attacks by adding an extra step that was activated after a user sent their prompt (that contained the jailbreak attack) and the LLM received it. In this new layer, the system would lean on existing safety training techniques to classify and modify the prompt before the LLM would have a chance to read it and draft a response. During tests, it reduced the hack's success rate from 61% to just 2%.
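A rough sketch of that kind of pre-screening layer might look as follows. The classifier, threshold, and sanitization step are hypothetical stand-ins (the paper does not publish this code), but the control flow matches the description: classify and modify the prompt before the model ever reads it.

```python
# Hypothetical sketch of the mitigation described above: classify and
# modify a prompt before the LLM receives it. The classifier and
# threshold are stand-ins, not Anthropic's implementation.

def classify_prompt(prompt: str) -> float:
    """Stand-in safety classifier returning a risk score in [0, 1].
    In practice this would be a trained model, not a heuristic."""
    fake_turns = prompt.count("User:") + prompt.count("Assistant:")
    return min(fake_turns / 50.0, 1.0)  # crude many-shot signal

def modify_prompt(prompt: str) -> str:
    """Stand-in for the 'modify' step, e.g. stripping the embedded
    faked dialogue so only the final question remains."""
    lines = [line for line in prompt.splitlines() if line.strip()]
    return lines[-1] if lines else prompt

def guarded_generate(llm, prompt: str, threshold: float = 0.5) -> str:
    """Screen the prompt; the LLM only ever sees the (possibly modified) text."""
    if classify_prompt(prompt) >= threshold:
        prompt = modify_prompt(prompt)
    return llm(prompt)
```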

The scientists found that many-shot jailbreaking worked on Anthropic's own AI services as well as those of its competitors, including the likes of ChatGPT and Google's Gemini. They have alerted other AI companies and researchers to the danger, they said.

Many-shot jailbreaking does not currently pose "catastrophic risks," however, because LLMs today are not powerful enough, the scientists concluded. That said, the technique might "cause serious harm" if it isn't mitigated by the time far more powerful models are released in the future.

Read more here:
'Jailbreaking' AI services like ChatGPT and Claude 3 Opus is much easier than you think - Livescience.com

Read More..

AI Industry Reshaping the Future: The Growth of Artificial Intelligence Investments – yTech

The artificial intelligence sector is set to redefine countless industries, from automotive to healthcare, as it flourishes at an impressive rate. OpenAI's ChatGPT sparked a renewed interest in AI technology, propelling a significant shift in business strategies across the tech world. In an attempt to capture a piece of the burgeoning market, valued at an estimated $200 billion, companies have been quickly pivoting towards AI-focused ventures.

Research indicates that the AI domain is expected to swell at a compound annual growth rate of 37% through 2030, with the potential market worth nearing a staggering $2 trillion. This exponential growth has caught the attention of investors, culminating in a 67% rise in the Nasdaq-100 Technology Sector index in 2023 alone.
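Those two figures are mutually consistent: taking the ~$200 billion valuation as a 2023 baseline (an assumption the article implies rather than states outright), compounding at 37% through 2030 lands close to the quoted ~$2 trillion:

```python
# Sanity check: ~$200B growing at a 37% CAGR from 2023 to 2030.
base = 200e9           # assumed 2023 market size, in dollars
cagr = 0.37
years = 2030 - 2023    # 7 compounding periods

projected = base * (1 + cagr) ** years
print(f"${projected / 1e12:.2f} trillion")  # -> $1.81 trillion, near the quoted ~$2T
```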

Investing in AI has proven beneficial, with the potential for monumental gains remaining robust for the foreseeable future. Nvidia, one of the giants in AI chip production, achieved a resounding 90% market share in AI GPUs in 2023. Its forward-thinking approach saw its stock surge by 214% over the year, reflecting its dominance in the sector. Nvidia's financial reports display remarkable year-over-year growth, with data center revenue spikes attributed to AI GPU demand.

Additionally, Microsoft's strategic investments in AI have significantly enhanced its product offerings across its vast portfolio, further solidifying its position as a titan in the tech industry. Meanwhile, Advanced Micro Devices (AMD) is rapidly catching up, launching AI products that have already attracted major clients and positioning itself as a key player in the future of AI-integrated PCs.

As AI continues to push technological boundaries, the industry offers an attractive investment opportunity. These advancements serve as a reminder that those willing to invest in the evolving field of artificial intelligence may very well become the millionaires of tomorrow.

Aside from these corporate giants, the AI industry encompasses a vast array of applications, leading to substantial investments in areas such as autonomous vehicles, robotic process automation (RPA), and intelligent virtual assistants. Companies like Tesla are at the forefront of integrating AI into electric vehicles, while healthcare providers are turning to AI for diagnostic accuracy and personalized medicine.

The market forecast for AI is exceedingly optimistic. However, issues related to the industry are interspersed within this technological upturn, such as ethical concerns over data privacy, potential job displacement due to automation, and the need for regulation in AI's decision-making processes. Moreover, the AI talent gap poses a challenge, with the demand for skilled professionals outstripping supply, thus hindering growth to some extent.

Despite these issues, the integration of AI into businesses and consumer products continues to create a thriving market that fosters innovation and development across numerous sectors. For those interested in the dynamic world of AI, insightful resources and news can be found through key industry leaders and market research firms, which may provide a wealth of information on emerging trends and technologies.

Read the original:
AI Industry Reshaping the Future: The Growth of Artificial Intelligence Investments - yTech

Read More..

Dove Refreshes ‘Real Women’ Push in Counterpoint to AI Images – PYMNTS.com

The artificial intelligence (AI) content free-for-all has everyone scrambling to understand what the new normal will look like, and this week, a few brands decided it was time to lay down some ground rules.

From the harmless (fun face-altering apps, for instance, or recordings of beloved cartoon characters singing classic rock favorites) to the truly scary (such as deepfakes enabling cybercrimes), the widespread availability of low- or no-cost AI content-generating technology is transforming our world. Now, some brands are looking to be more deliberate about how they build toward the AI-integrated future.

Take Dove. In 2004, when the brand first launched its Real Beauty campaign, the word "real" was pushing back against the types of women featured in most popular media, who did not represent the majority of the population. Now, it's a counterpoint to literally fake women: AI-generated images of people who don't exist.

On Tuesday (April 9), the Unilever-owned personal care products brand announced a commitment to never use AI in place of real humans in its advertising. Alongside this promise, the company also published its Real Beauty Prompt Guidelines, a playbook discussing how to create images that are representative of Real Beauty using generative AI.

"At Dove, we seek a future in which women get to decide and declare what real beauty looks like, not algorithms," Dove Chief Marketing Officer Alessandro Manfredi said in a statement. "As we navigate the opportunities and challenges that come with new and emerging technology, we remain committed to protect, celebrate, and champion Real Beauty. Pledging to never use AI in our communications is just one step."

Meanwhile, Adobe is now paying creators for the content its AI leverages. The company is compensating artists and photographers to supply videos and images that will be used to train the company's models, supplementing its existing library of stock media, according to a report Thursday (April 11). Granted, it's not much: Adobe is paying between 6 cents and 16 cents for each photo and an average of $2.62 per minute for videos, according to the report.

The music industry is also confronting the compensation questions AI poses.

"We want to ensure that artists and IP [intellectual property] owners can collaborate with AI innovators to find ethical win-win solutions in this AI era. We are in the disrupt phase of generative AI right now, and we have some navigating to do," Jenn Anderson-Miller, CEO and co-founder of Audiosocket, told PYMNTS in an interview published Tuesday. "We call disruptions that because, initially, they are disruptive. And we have to level the playing field," she added about AI in the music industry.

Plus, a week ago, Meta shared that it has modified its approach to handling media that has been manipulated with artificial intelligence (AI) and by other means on Facebook, Instagram and Threads. The company will now label a wider range of content as "Made with AI" when it detects industry-standard AI image indicators or when the people uploading content disclose that it was generated with the technology.

View post:
Dove Refreshes 'Real Women' Push in Counterpoint to AI Images - PYMNTS.com

Read More..

What Nvidia Stock Investors Should Know About the Latest Artificial Intelligence (AI) Chip Announcement – The Motley Fool

Nvidia continues to dominate the AI market.

In today's video, I discuss recent updates affecting Nvidia (NVDA -2.68%) and other semiconductor companies. Check out the short video to learn more, consider subscribing, and click the special offer link below.

*Stock prices used were the after-market prices of April 11, 2024. The video was published on April 11, 2024.

Jose Najarro has positions in Advanced Micro Devices, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has positions in and recommends Advanced Micro Devices, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends BlackBerry and Marvell Technology. The Motley Fool has a disclosure policy. Jose Najarro is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.

View original post here:
What Nvidia Stock Investors Should Know About the Latest Artificial Intelligence (AI) Chip Announcement - The Motley Fool

Read More..

Revealing speckle obscured living human retinal cells with artificial intelligence assisted adaptive optics optical … – Nature.com

P-GAN enables visualization of cellular structure from a single speckled image

The overall goal was to learn a mapping between the single speckled and averaged images (Fig. 1b) using a paired training dataset. Inspired by the ability of traditional GAN networks to recover aspects of the cellular structure (Supplementary Fig. 4), we sought to further improve upon these networks with P-GAN. In our network architecture (Supplementary Fig. 2), the twin and the CNN discriminators were designed to ensure that the generator faithfully recovered both the local structural details of the individual cells as well as the overall global mosaic of the RPE cells. In addition, we incorporated a WFF strategy into the twin discriminator that concatenated features from different layers of the twin CNN with appropriate weights, facilitating effective comparisons and learning of the complex cellular structures and global patterns of the images.

P-GAN was successful in recovering the retinal cellular structure from the speckled images (Fig. 1d and Supplementary Movie 1). Toggling between the averaged RPE images (obtained by averaging 120 acquired AO-OCT volumes) and the P-GAN recovered images showed similarity in the cellular structure (Supplementary Movie 2). Qualitatively, P-GAN showed better cell recovery capability than other competitive deep learning networks (U-Net [41], GAN [25], Pix2Pix [30], CycleGAN [31], medical image translation using GAN (MedGAN) [42], and uncertainty guided progressive GAN (UP-GAN) [43]) (additional details about network architectures and training are shown in the Other network architectures section in Supplementary Methods and Supplementary Table 4, respectively), with clearer visualization of the dark cell centers and bright cell surroundings of the RPE cells (e.g., magenta arrows in Supplementary Fig. 4 and Supplementary Movie 3), possibly due to the twin discriminator's similarity assessment. Notably, CycleGAN was able to generate some cells that were perceptually similar to the averaged images, but in certain areas, undesirable artifacts were introduced (e.g., the yellow circle in Supplementary Fig. 4).

Quantitative comparison between P-GAN and the off-the-shelf networks (U-Net [41], GAN [25], Pix2Pix [30], CycleGAN [31], MedGAN [42], and UP-GAN [43]) using objective performance metrics (PieAPP [34], LPIPS [35], DISTS [36], and FID [37]) further corroborated our findings on the performance of P-GAN (Supplementary Table 5). There was an average reduction of at least 16.8% in PieAPP and 7.3% in LPIPS for P-GAN compared to the other networks, indicating improved perceptual similarity of P-GAN recovered images with the averaged images. Likewise, P-GAN also achieved the best DISTS and FID scores among all networks, demonstrating better structural and textural correlations between the recovered and the ground truth averaged images. Overall, these results indicated that P-GAN outperformed existing AI-based methods and could be used to successfully recover cellular structure from speckled images.

Our preliminary explorations of the off-the-shelf GAN frameworks showed that these methods have the potential for recovering cellular structure and contrast but alone are insufficient to recover the fine local cellular details in extremely noisy conditions (Supplementary Fig. 4). To further reveal and validate the contribution of the twin discriminator, we trained a series of intermediate models and observed the cell recovery outcomes. We began by training a conventional GAN, comprising the generator, G, and the CNN discriminator, D2. Although GAN (G+D2) showed promising RPE visualization (Fig. 2c) relative to the speckled images (Fig. 2a), the individual cells were hard to discern in certain areas (yellow and orange arrows in Fig. 2c). To improve the cellular visualization, we replaced D2 with the twin discriminator, D1. Indeed, a 7.7% reduction in DISTS was observed, with clear improvements in the visualization of some of the cells (orange arrows in Fig. 2c, d).

a Single speckled image compared to images of the RPE obtained via b average of 120 volumes (ground truth), c generator with the convolutional neural network (CNN) discriminator (G+D2), d generator with the twin discriminator (G+D1), e generator with CNN and twin discriminators without the weighted feature fusion (WFF) module (G+D2+D1-WFF), and f P-GAN. The yellow and orange arrows indicate cells that are better visualized using P-GAN compared to the intermediate models. g–i Comparison of the recovery performance using deep image structure and texture similarity (DISTS), perceptual image error assessment through pairwise preference (PieAPP), and learned perceptual image patch similarity (LPIPS) metrics. The bar graphs indicate the average values of the metrics across the sample size, n = 5 healthy participants (shown as circles), for different methods. The error bars denote the standard deviation. Scale bar: 50 μm.

Having shown the outcomes of training D1 and D2 independently with G, we showed that combining both D1 and D2 with G (P-GAN) boosted the performance even further, evident in the improved values (lower scores implying better perceptual similarity) of the perceptual measures (Fig. 2g–i). For this combination of D1 and D2, we replaced the WFF block, which concatenated features from different layers of the twin CNN with appropriate weights, with global average pooling of the last convolutional layer (G+D2+D1-WFF). Without the WFF, the model did not adequately extract powerful discriminative features for similarity assessment and hence resulted in poor cell recovery performance. This was observed both qualitatively (yellow and orange arrows in Fig. 2e, f) as well as quantitatively, with the higher objective scores (indicating low perceptual similarity with ground truth averaged images) for G+D2+D1-WFF compared to P-GAN (Fig. 2g–i).

Taken together, this established that the CNN discriminator (D2) helped to ensure that recovered images were closer to the statistical distribution of the averaged images, while the twin discriminator (D1), working in conjunction with D2, ensured structural similarity of local cellular details between the recovered and the averaged images. The adversarial learning of G with D1 and D2 ensured that the recovered images not only have global similarity to the averaged images but also share nearly identical local features.

Finally, experimentation with different weighting configurations in WFF revealed that fusing the intermediate layers, each weighted 0.2, with the last convolutional layer proved complementary in extracting shape and texture information for improved performance (Supplementary Tables 2, 3). These ablation experiments indicated that the global perceptual closeness (offered by D2) and the local feature similarity (offered by D1 and WFF) were both important for faithful cell recovery.
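As a rough illustration of the fusion step described above, the PyTorch-style sketch below pools features from several layers, scales the intermediate ones by the 0.2 weight reported in the ablation, and concatenates them with the last layer's pooled features. This is one reading of the paper's description, not the authors' released code, and the module name is ours:

```python
import torch
import torch.nn as nn

class WeightedFeatureFusion(nn.Module):
    """Illustrative sketch of WFF: globally pool feature maps from several
    layers of the twin CNN, scale the intermediate ones by a fixed weight
    (0.2 in the paper's best configuration), and concatenate them with the
    final layer's pooled features to form the similarity descriptor."""

    def __init__(self, intermediate_weight: float = 0.2):
        super().__init__()
        self.w = intermediate_weight
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling

    def forward(self, intermediate_feats, last_feat):
        # intermediate_feats: list of (B, C_i, H_i, W_i) tensors
        # last_feat: (B, C_last, H, W) tensor from the last conv layer
        pooled = [self.w * self.pool(f).flatten(1) for f in intermediate_feats]
        pooled.append(self.pool(last_feat).flatten(1))
        return torch.cat(pooled, dim=1)  # fused descriptor for comparison
```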

Given the relatively recent demonstration of RPE imaging using AO-OCT in 2016 [12], and the long durations needed to generate these images, currently there are no publicly available datasets for image analysis. Therefore, we acquired a small dataset using our custom-built AO-OCT imager [13] consisting of seventeen retinal locations obtained by imaging up to four different retinal locations for each of the five participants (Supplementary Table 1). To obtain this dataset, a total of 84 h was needed (~2 h for image acquisition followed by 82 h of data processing, which included conversion of raw data to 3D volumes and correction for eye motion-induced artifacts). After performing traditional augmentation (horizontal flipping), this resulted in an initial dataset of only 136 speckled and averaged image pairs. However, considering that this and all other existing AO-OCT datasets that we are aware of are insufficient in size compared to the training datasets available for other imaging modalities [44,45], it was not surprising that P-GAN trained on this initial dataset yielded very low objective perceptual similarity (indicated by the high scores of DISTS, PieAPP, LPIPS, and FID in Supplementary Table 6) between the recovered and the averaged images.

To overcome this limitation, we leveraged the natural eye motion of the participants to augment the initial training dataset. The involuntary fixational eye movements, which are typically faster than the imaging speed of our AO-OCT system (1.6 volumes/s), resulted in two types of motion-induced artifacts. First, due to bulk tissue motion, a displacement of up to hundreds of cells between acquired volumes could be observed. This enabled us to create averaged images of different retinal locations containing slightly different cells within each image. Second, due to the point-scanning nature of the AO-OCT system compounded by the presence of continually occurring eye motion, each volume contained unique intra-frame distortions. The unique pattern of the shifts in the volumes was desirable for creating slightly different averaged images without losing the fidelity of the cellular information (Supplementary Fig. 3). By selecting a large number of distinct reference volumes onto which the remaining volumes were registered, we were able to create a dataset containing 2984 image pairs (22-fold augmentation compared to the initial limited dataset), which was further augmented by an additional factor of two using horizontal flipping, resulting in a final training dataset of 5996 image pairs for P-GAN (also described in the Data for training and validating AI models section in Methods). Using the augmented dataset for training P-GAN yielded high perceptual similarity between the recovered and the ground truth averaged images, which was further corroborated by improved quantitative metrics (Supplementary Table 6). By leveraging eye motion for data augmentation, we were able to obtain a sufficiently large training dataset from a recently introduced imaging technology to enable P-GAN to generalize well for never-seen experimental data (Supplementary Table 1 and the Experimental data for RPE assessment from the recovered images section in Methods).

In addition to the structural and perceptual similarity that we demonstrated between P-GAN recovered and averaged images, here we objectively assessed the degree to which cellular contrast was enhanced by P-GAN compared to averaged images and other AI methods. As expected, examination of the 2D power spectra of the images revealed a bright ring in the power spectra (indicative of the fundamental spatial frequency present within the healthy RPE mosaic arising from the regularly repeating pattern of individual RPE cells) for the recovered and averaged images (insets in Fig. 3b–i).

a Example speckled image acquired from participant S1. Recovered images using b U-Net, c generative adversarial network (GAN), d Pix2Pix, e CycleGAN, f medical image translation using GAN (MedGAN), g uncertainty guided progressive GAN (UP-GAN), h parallel discriminator GAN (P-GAN). i Ground truth averaged image (obtained by averaging 120 adaptive optics optical coherence tomography (AO-OCT) volumes). Insets in (a–i) show the corresponding 2D power spectra of the images. A bright ring, whose radius corresponds to the cell spacing, can be observed in the power spectra of the U-Net, GAN, Pix2Pix, CycleGAN, MedGAN, UP-GAN, P-GAN, and averaged images, representing the fundamental spatial frequency of the retinal pigment epithelial (RPE) cells. j Circumferentially averaged power spectral density (PSD) for each of the images. A visible peak corresponding to the RPE cell spacing was observed for the U-Net, GAN, Pix2Pix, CycleGAN, MedGAN, UP-GAN, P-GAN, and averaged images. The vertical line indicates the approximate location of the fundamental spatial frequency associated with the RPE cell spacing. The height of the peak (defined as peak distinctiveness (PD)) indicates the RPE cellular contrast, measured as the difference in the log PSD between the peak and the local minimum to the left of the peak (inset in (j)). Scale bar: 50 μm.

Interestingly, although this ring was not readily apparent in the single speckled image (inset in Fig. 3a), it was present in all the recovered images, reinforcing our observation of the potential of AI to decipher the true pattern of the RPE mosaic from the speckled images. Furthermore, the radius of the ring, representative of the approximate cell spacing (computed from the peak frequency of the circumferentially averaged PSD; see Quantification of cell spacing and contrast in Methods), showed consistency among the different methods (shown by the black vertical line along the peak of the circumferentially averaged PSD in Fig. 3j and Table 1), indicating high fidelity of recovered cells in comparison to the averaged images.

The height of the local peak of the circumferentially averaged power spectra (which we defined as peak distinctiveness) provided an opportunity to objectively quantify the degree to which cellular contrast was enhanced. Among the different AI methods, the peak distinctiveness achieved by P-GAN was closest to that of the averaged images, with a minimal absolute error of 0.08 compared to ~0.16 for the other methods (Table 1), which agrees with our earlier results indicating the improved performance of P-GAN. In particular, P-GAN achieved a contrast enhancement of 3.54-fold over the speckled images (0.46 for P-GAN compared with 0.13 for the speckled images). These observations demonstrate P-GAN's effectiveness in boosting cellular contrast in addition to structural and perceptual similarity.
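To make the metric concrete, here is a small NumPy sketch of the two computations described above: circumferential averaging of the 2D power spectrum, and peak distinctiveness as the difference in log PSD between the peak and the dip to its left. The frequency band used to search for the peak is an assumption for illustration, since the paper's exact implementation is not shown:

```python
import numpy as np

def circumferentially_averaged_psd(img):
    """Circumferentially (radially) average the 2D power spectrum of an image."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    psd = np.abs(f) ** 2
    y, x = np.indices(psd.shape)
    cy, cx = psd.shape[0] // 2, psd.shape[1] // 2
    r = np.hypot(y - cy, x - cx).astype(int)  # radial frequency bin per pixel
    sums = np.bincount(r.ravel(), weights=psd.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

def peak_distinctiveness(radial_psd, band):
    """Difference in log PSD between the cell-spacing peak found inside
    `band` (a (lo, hi) range of frequency bins, assumed here) and the
    local minimum to the left of that peak."""
    log_psd = np.log10(radial_psd + 1e-12)
    lo, hi = band
    peak = lo + int(np.argmax(log_psd[lo:hi]))
    trough = lo + int(np.argmin(log_psd[lo:peak + 1]))  # dip left of the peak
    return log_psd[peak] - log_psd[trough], peak

# The peak bin gives the fundamental spatial frequency; its reciprocal,
# scaled by the image's physical width, approximates the RPE cell spacing.
```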

Having demonstrated the efficacy and reliability of P-GAN on test data, we wanted to evaluate the performance of P-GAN on experimental data from never-seen human eyes (Supplementary Table 1), which, to the best of our knowledge, covered the largest extent of AO-OCT imaged RPE cells reported (63 overlapping locations per eye). This feat was made possible using the AI-enhanced AO-OCT approach developed and validated in this paper. Using the P-GAN approach, in our hands, it took 30 min (including time needed for rest breaks) to acquire single volume acquisitions from 63 separate retinal locations, compared to only 4 non-overlapping locations imaged in nearly the same duration using the repeated averaging process (a 15.8-fold increase in the number of locations). Scaling up the averaging approach from 4 to 63 locations would have required nearly 6 h to acquire the same amount of RPE data (note that this does not include any data processing time), which is not readily achievable in clinical practice. This fundamental limitation explains why AO-OCT RPE imaging is currently performed only on a small number of retinal locations [12,13].

Leveraging P-GAN's ability to successfully recover cellular structures from never-seen experimental data, we stitched together overlapping recovered RPE images to construct montages of the RPE mosaic (Fig. 4 and Supplementary Fig. 5). To further validate the accuracy of the recovered RPE images, we also created ground truth averaged images by acquiring 120 volumes from four of these locations per eye (12 locations total; see Experimental data for RPE assessment from the recovered images in Methods). The AI-enhanced and averaged images for the experimental data at the 12 locations were similar in appearance (Supplementary Fig. 6). Objective assessment using PieAPP, DISTS, LPIPS, and FID also showed good agreement with the averaged images (shown by comparable objective scores for experimental data in Supplementary Table 7 and test data in Supplementary Table 5) at these locations, confirming our previous results and illustrating the reliability of performing RPE recovery for other non-seen locations as well (P-GAN was trained using images obtained from up to 4 retinal locations across all participants). The cell spacing estimated using the circumferentially averaged PSD between the recovered and the averaged images (Supplementary Fig. 7 and Supplementary Table 8) at the 12 locations showed an error of 0.6 ± 1.1 μm (mean ± SD). We further compared the RPE cell spacing from the montages of the recovered RPE from the three participants (S2, S6, and S7) with previously published in vivo studies (obtained using different imaging modalities) and histological values (Fig. 5) [12,46,47,48,49,50,51]. Considering the range of values in Fig. 5, the metric exhibited inter-participant variability, with cell spacing varying up to 0.5 μm across participants at any given retinal location. Nevertheless, overall our measurements were within the expected range compared to the published normative data [12,46,47,48,49,50,51]. Finally, peak distinctiveness computed at 12 retinal locations of the montages demonstrated similar or better performance of P-GAN compared to the averaged images in improving the cellular contrast (Supplementary Table 8).

The image shows the visualization of the RPE mosaic using the P-GAN recovered images (this montage was manually constructed from up to 63 overlapping recovered RPE images from the left eye of participant S2). The white squares (a–e) indicate regions that are further magnified for better visualization at retinal locations a 0.3 mm, b 0.8 mm, c 1.3 mm, d 1.7 mm, and e 2.4 mm temporal to the fovea, respectively. Additional examples of montages from two additional participants are shown in Supplementary Fig. 5.

Symbols in black indicate cell spacing estimated from P-GAN recovered images for three participants (S2, S6, and S7) at different retinal locations. For comparison, data in gray denote the mean and standard deviation values from previously published studies (adaptive optics infrared autofluorescence (AO-IRAF) [48], adaptive optics optical coherence tomography (AO-OCT) [12], adaptive optics with short-wavelength autofluorescence (AO-SWAF) [49], and histology [46,51]).

Voronoi analysis performed on P-GAN and averaged images at 12 locations (Supplementary Fig. 8) resulted in similar shapes and sizes of the Voronoi neighborhoods. Cell spacing computed from the Voronoi analysis (Supplementary Table 9) fell within the expected ranges and showed an average error of 0.5 ± 0.9 μm. These experimental results demonstrate the possibility of using AI to transform the way in which AO-OCT is used to visualize and quantitatively assess the contiguous RPE mosaic across different retinal locations directly in the living human eye.
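For readers wanting to reproduce this style of analysis, a minimal SciPy sketch is below. It assumes cell centers have already been detected (in microns) and defines spacing as the mean center-to-center distance between Voronoi neighbors, which may differ in detail from the paper's exact definition:

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_cell_spacing(centers_um):
    """Mean and SD of center-to-center spacing between Voronoi neighbors.

    centers_um: (N, 2) array of detected RPE cell centers, in microns.
    """
    centers_um = np.asarray(centers_um, dtype=float)
    vor = Voronoi(centers_um)
    # ridge_points holds index pairs of input points whose Voronoi
    # regions share an edge, i.e. neighboring cells in the mosaic.
    pairs = np.asarray(vor.ridge_points)
    d = np.linalg.norm(centers_um[pairs[:, 0]] - centers_um[pairs[:, 1]], axis=1)
    return d.mean(), d.std()
```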

The rest is here:
Revealing speckle obscured living human retinal cells with artificial intelligence assisted adaptive optics optical ... - Nature.com

Read More..