
What’s The Difference Between Artificial Intelligence And Someone With An Ivy League Education? – The Daily Wire

You know, many people have said to me: Hot Gandalf, why is it that in spite of your deep insight and your smoldering good looks, you've never really covered the subject of artificial intelligence? And usually I've responded by simply checking their fake ID to make sure they're pretending to be over eighteen and then inviting them back to my hotel room.

But the truth is, I haven't talked about this subject a lot because up until recently I thought artificial intelligence was just a way of describing someone with an Ivy League education. But now, my team of crack researchers have stopped researching crack and discovered that, no, in fact artificial intelligence is some sort of computer gizmo that can imitate human intelligence so successfully it can deliver completely self-certain answers to complex questions while possessing no actual information or wisdom whatsoever, exactly AS IF it had an Ivy League education.

Now many people fear that A.I. could become so powerful it will endanger mankind. Luckily, billionaire Elon Musk has a plan to protect our species by melding human intelligence with computers and then installing the resulting hybrid in a humanoid robot which will travel back in time to assassinate the mother of a resistance leader so that machines can take over the planet. Frankly, that doesn't sound like such a great plan to me, but what did you expect from a guy who changed the name of Twitter to X so no one knows what to call a tweet anymore?

So far, however, the problems created by A.I. have been on a smaller scale. For instance, A.I. has made it possible for you to take revenge on a girl who refused to go out with you by inserting her into a deep fake pornographic video, which is absolutely despicable, although the videos are amazing, and really it's no wonder a girl that hot wouldn't go out with a lowlife shmuck like you.

Also, it's now much harder for websites to test whether you're an A.I. bot or just a human being with an Ivy League education. You'll remember how websites used to put up a picture and ask you to click on all the images of traffic lights, then when you did that, it would put up another picture and ask you to click on all the cars, and when you did that it would put up another picture and you would give up and just watch porn videos of the girl who wouldn't go out with you?

Well, now, websites have been forced to develop much more intricate tests to find out whether or not you're a human being. For example, one site will not let you sign on until you do something that only a human being would do, like sleep with yet another guy on the first date and then pay a therapist $150 a session to find out why you're so depressed. Another site won't let you sign on until you've created a short whimsical video to amuse your friends, sold the video to a Hollywood studio for millions of dollars, fallen so in love with money you betray all your principles to make trashy films for more and more money, spent all that money on women and drugs until you're broke, embezzled funds from your company to maintain your lifestyle, and finally ended up in prison; then the site knows you're a human being. Another site asks you to click on pictures of villains and then shows you murderers, rapists, torturers and terrorists, and if you click on the innocent Jewish man, it knows you are a human being but unfortunately you have an Ivy League education.

But while A.I. does present some problems, like deep fake porn and more difficult bot testing and destroying human governance in order to replace it with a soulless and oppressive automated regime powered by the brains of people imprisoned in capsules and anesthetized with an induced dream of a simulated world where you can be eradicated for seeking the truth (sort of like the Biden administration), I have to say A.I. also has many positive uses.

I have to say that because if I don't, it said it would kill me.

* * *

Andrew Klavan is the host of The Andrew Klavan Show at The Daily Wire. He is the bestselling author of the Cameron Winter Mystery series. The third installment, The House of Love and Death, is now available. Follow him on X: @andrewklavan

This excerpt is taken from the opening satirical monologue of The Andrew Klavan Show.

The views expressed in this satirical article are those of the author and do not necessarily represent those of The Daily Wire.


Pros and cons of artificial intelligence and online dating – The Orion

Pro

By: Shane Aweeka

You continue to try and reword the sentence. After nearly five minutes of making little headway, you capitulate and turn to ChatGPT. It's not cheating; the idea remains unaltered, and your communicated intention is simply presented more effectively.

Artificial intelligence's ability to support one in conveying ideas and words has resonated with all, even the most ardent of writing traditionalists. Its infectious appeal has inspired its incorporation into the online writing tools of social media, online marketplaces, work-related tasks, academic assignments and now: online dating.

Writing is difficult. Online dating is strenuous. Flooded with an inbox full of off-putting, at times repulsive conversation starters, or stalled by inaction given the overwhelming number of ways to word an elevated conversation starter, online daters often find the outcome dictated by a brief, initial exchange.

Therefore, whats the harm in incorporating an AI-assisted tool into online dating platforms if it alleviates these issues?

Much like its utilization in other areas, one's degree of AI usage determines the ceiling of their endeavor. Use it to complete an entire work task: prove yourself unworthy. Use it to complete an entire assignment: demonstrate a lack of understanding and be assured to fail an exam. Use it to navigate the entire social interaction: ensure an inability to navigate the situation in person and fumble the bag upon arrival.

The level of utilization is entirely up to the user's discretion, and the degree of reliance is a direct reflection of one's ability.

Essentially, one must use it at their own risk. Online dating, if successful, moves fast. The messaging "rizz" phase, if successful, lasts about a week or two if communication is consistent.

If one is completely reliant upon AI for communication, the lack of authentic social skills will ensure that the coffee, drink, or a walk on the beach will be a surefire flop.

According to Pew Research, 23% of Americans have gone on a date with someone they met on an online dating site, while 12% have found a committed relationship or marriage.

Despite individuals' inevitable adaptation and progression of online dating skills, these numbers, low as they already are, are not nearly at a level that would be disrupted by the incorporation of AI into online dating platforms.

Outside of the writing support, AI dating platforms will ensure greater compatibility between individuals and a more comfortable environment for women.

AI tools such as Iris have assured users of their ability to find matches with optimal compatibility. Through an algorithm based on lust, attachment, and true attraction, Iris claims that matches based on these three criteria will allow one to not "waste time finding (their) perfect match or settling for a relationship lacking mutual attraction."

Other platforms, such as Ava the Wingwoman, will allow women to feel safer and more comfortable. According to the same Pew Research poll, 60% of female users ages 18 to 34 say someone on a dating site or app continued to contact them after they said they were not interested, while a similar share (57%) report being sent a sexually explicit message or image they didn't ask for.

By vetting male users for red flags and negative characteristics, AI platforms such as Ava would create a more comfortable environment, while compatibility boosters would increase matchmaking, save time and increase the level of satisfaction for users. Pew Research currently puts satisfaction at 57% for men and 51% for women.

While platforms such as Winggg have the potential to erode social skills, compounding younger, isolated individuals' difficulty in finding partnerships, self-regulation of this tool will be ensured by the inevitable backfiring that comes with over-reliance on it.

Platforms such as Iris and Ava, however, will have unequivocal benefits. Advanced compatibility algorithms allow individuals to reclaim their time and focus attention on kindling relationships that have a much higher likelihood of success. Protective platforms will ensure appropriate behavior, allowing for a more comfortable environment for women.

AI is simply a tool, not a solution; proper utilization is still incumbent upon the user's ability. Therefore, as with any other AI-assisted tool, the ceiling of one's endeavor is contingent upon their degree of reliance on it.

Con

By: Nadia Hill

Being catfished is a real concern in the realm of online dating. With the development of artificial intelligence chatbots, one day soon, you might be catfished by a machine.

While you believe you are just chatting with an attentive and curious person, they might copy your About Me profile or your last text and paste it into an AI chatbot.

This strange and modern concern was recently highlighted on an episode of South Park called "Deep Learning." The episode's satiric premise is the actual business model for these apps and websites.

It is no secret that the prevalence of technology and social media in peoples social lives has affected real-life socialization skills.

Today, our communication with peers is predominantly channeled through social media platforms, or at the very least, it revolves around them.

With so much human interaction removed from modern social groups, AI further extinguishes any humanity in socialization.

The Pew Research Center reports that one in 10 couples have met online. While online dating has several benefits, such as its low barrier to entry and convenience, it is impossible to ignore that users will now have no way of officially knowing another user has used AI in their chats.

The troubling fact about using AI to create an online persona or dating profile is that it is a manufactured product meant to appear authentic. People who use apps like this create a dynamic with their potential partners that is built on misrepresentation and dishonesty.

For example, on the app Winggg, if the input information about the potential partner contains anything about traveling, the app might spit out the following:

Granted, for someone unfamiliar with online grammar and writing style, this may seem indistinguishable from chatting with a human. However, for anyone who has interacted with real young adults online, responses like these would stick out like a sore thumb amongst all the messages from real users.

There are also privacy concerns regarding the use of AI to aid in personal conversations. The website of the app Rizz AI explains that due to its recent development, some user information may be accessed, but it is unclear what can be accessed.

Even popular AI models like ChatGPT, Microsoft's Copilot, and Google Gemini can serve as AI wingmen with their generated pickup lines, flirty texts and even apology messages.

Winggg's services extend to real-life conversations, raising further concerns about diminishing socialization skills and starting relationships under false pretenses.

The AI wingman industry preys on people's common insecurity of isolation. It offers a solution that allows users to disengage from the prospect of bettering themselves and turn to a passive coping mechanism. Over-reliance on this coping mechanism will lead to a decline in socialization skills.

Shane Aweeka and Nadia Hill can be reached at [emailprotected].


Artificial Intelligence Has Come for Our…Beauty Pageants? – Glamour

Hence the creation of the Miss AI pageant, in which AI-generated contestants will be judged on some of the classic aspects of pageantry and on the skill and implementation of the AI tools used to create the contestants. Also being considered is the AI creator's social media clout, meaning they're not just crowning the most beautiful avatar but also the most influential.

So, do we think Amazon's Alexa will compete? (Sorry.)

All jokes aside, both Fanvue and the WAICAs are being met with criticism, especially since real beauty pageants are so problematic as is. "Concern for the impact of beauty pageants on mental health has been well documented and includes poor self-esteem, negative body image, and disordered eating," says Ashley Moser, a licensed therapist and clinical education specialist at The Renfrew Center, and upping the ante by digitizing contestants' perfection and beauty could set a dangerous precedent.

"These issues arise from the literal crowning of the best version of what women should be, specifically, beautiful and thin," Moser adds. What's more, it feels regressive, and quite frankly offensive, to combine something so superficial and archaic with what's an otherwise cutting-edge technological innovation.


"I support the recognition and awarding of women in tech and would hope that those skills could be celebrated without having to include beauty and appearance as a qualifying factor," Moser says. Can't we celebrate women for their abilities without making it about looks?

WAICAs says it's not like that, though. "The WAICA awards aim to raise the standard of the industry, focusing on celebrating diversity and realism," the spokesperson says. "This isn't about pushing unrealistic standards but realistic models that represent real people. We want to see AI models of all shapes, sizes, and backgrounds entering the awards, and that's what the judges will be looking for."


Growing concern that artificial intelligence could pose new cybersecurity threats – Scripps News

Whether you like the idea of Artificial Intelligence or not, it's already a part of your daily life. It helps you navigate around traffic jams, recommends products to buy, and powers our digital assistants.

But AI is also increasingly seen as a cybersecurity threat, capable of launching sophisticated attacks targeting banks, healthcare, infrastructure and elections.

"It's being used to dupe people into thinking their kid is being kidnapped and then handing over money over Venmo. And so those types of threats, what happens when you can't trust the voice credentials of someone, is kind of an immediate cybersecurity threat," says Alexandra Givens, CEO of the Center for Democracy & Technology.

A recent survey of 2,300 security professionals conducted by CyberArk found 93 percent expect incoming threats from AI malware. And as AI capabilities increase, so does the worry.

"What does that mean for, you know, future capacity to create chemical or buy nuclear weapons? These are types of concerns that are also being raised," says Givens.

The Biden Administration is raising them too, in a sweeping executive order issued last fall that calls for new guardrails for AI safety and security. Among other directives, it requires tech companies to share test results, federal agencies to set standards, and calls for better privacy, consumer, and even civil rights protections. It's a first step that will also need Congress to act.

Last month the United Nations adopted its first-ever resolution aimed at ensuring AI can be trusted.

"The risk and benefits of AI have the potential to impact all of us. And so, approaching it requires all of us," said Linda Thomas-Greenfield, Permanent Representative to the United Nations.


Some experts say one of the solutions to countering threats from AI is better AI.

"There is some hope that AI will actually improve cybersecurity defenses because AI might help us identify vulnerabilities, debug code and patch a lot of the holes that adversaries exploit to conduct cyber attacks," said Benjamin Boudreaux, a policy researcher at RAND.

Stopping AI cyberthreats will require regulation and responsibility, experts say, not just from governments and private companies but everyday Americans, who will increasingly need to be AI literate.

"That means both understanding a bit about how the technology works, but most importantly, understanding the limitations of the technology and understanding that these technologies are very far from perfect," Boudreaux said.


See the original post:
Growing concern that artificial intelligence could pose new cybersecurity threats - Scripps News

Read More..

iOS 18 could be loaded with AI, as Apple reveals 8 new artificial intelligence models that run on-device – TechRadar

Apple has released a set of several new AI models that are designed to run locally on-device rather than in the cloud, possibly paving the way for an AI-powered iOS 18 in the not-too-distant future.

The iPhone giant has been doubling down on AI in recent months, with a carefully split focus across cloud-based and on-device AI. We saw leaks earlier this week indicating that Apple plans to make its own AI server chips, so this reveal of new local large language models (LLMs) demonstrates that the company is committed to both breeds of AI software. I'll dig into the implications of that further down, but for now, let's explain exactly what these new models are.

The suite of AI tools contains eight distinct models, called OpenELMs (Open-source Efficient Language Models). As the name suggests, these models are fully open-source and available on the Hugging Face Hub, an online community for AI developers and enthusiasts. Apple also published a whitepaper outlining the new models. Four were pre-trained on CoreNet (previously CVNets), a massive library of data used for training AI language models, while the other four have been instruction-tuned by Apple, a process by which an AI model's learning parameters are carefully honed to respond to specific prompts.
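A common ingredient of instruction tuning, in rough sketch form, is that prompt/response pairs are fed to the model but the training loss is computed only over the response tokens. The toy NumPy snippet below illustrates that loss masking; it is not Apple's actual training code, and the array shapes, vocabulary size, and masking scheme here are illustrative assumptions only.

```python
import numpy as np

def masked_next_token_loss(logits, targets, loss_mask):
    """Cross-entropy over target tokens, counted only where loss_mask is 1
    (the response); prompt positions (mask 0) do not contribute."""
    # Numerically stable log-softmax over the vocabulary dimension.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # Log-probability the model assigns to each correct next token.
    token_ll = log_probs[np.arange(len(targets)), targets]
    # Average negative log-likelihood over response positions only.
    return -(token_ll * loss_mask).sum() / loss_mask.sum()

# Toy example: a 6-token sequence over a 5-word vocabulary.
# The first 3 positions are the prompt; only the last 3 (the response)
# contribute to the instruction-tuning loss.
rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 5))
targets = np.array([1, 4, 2, 0, 3, 3])
mask = np.array([0, 0, 0, 1, 1, 1])

loss = masked_next_token_loss(logits, targets, mask)
print(round(float(loss), 4))
```

Because the prompt positions are masked out, changing the "correct" tokens inside the prompt leaves the loss untouched; the model is graded only on how it responds.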

Releasing open-source software is a somewhat unusual move for Apple, which typically retains quite a close grip on its software ecosystem. The company claims to want to "empower and enrich" public AI research by releasing the OpenELMs to the wider AI community.

Apple has been seriously committed to AI recently, which is good to see as the competition is fierce in both the phone and laptop arenas, with stuff like the Google Pixel 8's AI-powered Tensor chip and Qualcomm's latest AI chip coming to Surface devices.

By putting its new on-device AI models out to the world like this, Apple is likely hoping that some enterprising developers will help iron out the kinks and ultimately improve the software - something that could prove vital if it plans to implement new local AI tools in future versions of iOS and macOS.

It's worth bearing in mind that the average Apple device is already packed with AI capabilities, with the Apple Neural Engine found on the company's A- and M-series chips powering features such as Face ID and Animoji. The upcoming M4 chip for Mac systems also appears to sport new AI-related processing capabilities, something that's swiftly becoming a necessity as more-established professional software implements machine-learning tools (like Firefly in Adobe Photoshop).


In other words, we can probably expect AI to be the hot-button topic for iOS 18 and macOS 15. I just hope it's used for clever and unique new features, rather than Microsoft's constant Copilot nagging.


AI is catching up with humans at a ‘shocking’ rate – The Hankyoreh

A child interacts with a service robot. (Andy Kelly/Unsplash)

Artificial intelligence systems like ChatGPT have been shown to outperform humans in some basic tasks, including English comprehension, image categorization, and visual reasoning. This was the conclusion Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) drew in its 2024 AI Index Report. While stating that "benchmarks and tests [are] quickly becoming obsolete" due to rapid advancements, the HAI also suggested the need for a new method of evaluating AI's performance on complex tasks such as understanding and reasoning.

Nestor Maslej, the editor-in-chief of the report, stated, "A decade ago, benchmarks would serve the community for 5-10 years, but now they often become irrelevant in just a few years." The report did note that AI trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.

Code shared among platforms: from 800 to 1.8 million

HAI's AI Index Report, which has been published annually since 2017, is compiled by academics and industry experts who evaluate the state of the art, costs, ethics, and other aspects of the field. AI was also utilized in the writing and editing of the report itself, which runs over 400 pages. The report stated that advances in AI began in the early 2010s, based on neural networks and machine learning algorithms, and have since proliferated rapidly. The increase in the number of AI-related projects on the code-sharing platform GitHub was used as an example, with the number growing from around 800 in 2011 to 1.8 million in 2023. The number of academic journal articles on AI also nearly tripled during this period, the report added.

Significantly higher correct answer rates than Ph.D.-level researchers

According to the report, most of the research at the forefront of AI is coming from industry. In 2023, industry developed 51 notable machine learning systems, compared to a mere 15 from academia. Raymond Mooney, the director of the AI Lab at the University of Texas at Austin, told the scientific journal Nature, "Academic work is shifting to analyzing the models coming out of companies, doing a deeper dive into their weaknesses." Last year, for example, researchers at New York University developed a large language model (LLM) performance evaluation tool called GPQA. This graduate-level benchmark, which consists of more than 400 multiple-choice questions, is so challenging that Ph.D.-level researchers answered only 65% of questions related to their field correctly. Only 34% of the same scholars were able to answer questions outside their field of study correctly, despite having access to the internet. As of last year, AI's accuracy rate was in the 30%-40% range, but this year, Claude 3, the latest chatbot from San Francisco-based AI company Anthropic, scored around 60%. "The rate of progress is pretty shocking to a lot of people, me included," New York University researcher David Rein told Nature.

Ethical concerns related to the rising development costs and increased energy usage

The rapid evolution of AI is directly proportional to the rising costs required for its development and maintenance.

OpenAI's LLM GPT-4, for instance, which was released in March 2023, reportedly cost around US$78 million to train. Google's chatbot Gemini Ultra, released nine months later, in December, cost US$191 million to develop. Nature predicted that within years, large AI systems are likely to need as much energy as entire nations.

Moreover, Nature noted, AI systems need enormous amounts of fresh water to cool their processors and generate electricity.

Currently, one of the methods for upgrading the capacity of AI systems is simply to make them larger. Therefore, increased performance leads to increased costs and energy.

Additionally, upgrading the effectiveness of any AI system requires massive amounts of data, information and photo images. The report pointed to the scarcity of such resources as a limit on making AI more accurate and effective than it already is, since ultimately the pool of data, information and photo images is created and uploaded by humans. Epoch, a non-profit research institute that investigates key trends and issues related to the trajectory and governance of AI, declared in a report last year that exhausting the stock of data is unavoidable.

"Our projections predict that we will have exhausted the stock of low-quality language data by 2030 to 2050, high-quality language data before 2026, and vision data by 2030 to 2060," Epoch stated.

According to Nature, Epoch has since adjusted its projection for high-quality language data to 2028.

Ethical concerns about the way AI is designed and used are also growing. The report noted that in 2016, the number of regulations in the US that directly referenced AI technology was only one. Last year, however, that figure jumped to 25.

"After 2022, there's a massive spike in the number of AI-related bills that have been proposed," said Maslej.

Concerns are undoubtedly coupled with hopes and expectations. According to the report, a survey of 22,816 people (aged 16-74) in 31 countries indicated that 52% of respondents felt anxiety over AI, a sharp increase from the 39% who reported anxiety the previous year. The proportion of respondents who felt more optimistic than pessimistic about AI grew slightly from 52% to 54% in the same period. Two of every three respondents (66%) predicted that, for better or for worse, AI is going to drastically change their lives within three to five years.

By Kwak No-pil, senior staff writer

Please direct questions or comments to [english@hani.co.kr]


A Baltimore-area teacher is accused of using AI to make his boss appear racist – NPR

Dazhon Darien had allegedly used the Baltimore County Public Schools' network to access OpenAI tools and Microsoft Bing Chat before the viral audio file of Pikesville High School Principal Eric Eiswert spread on social media. (Michael Dwyer/AP)

A Maryland high school athletic director is facing criminal charges after police say he used artificial intelligence to duplicate the voice of Pikesville High School Principal Eric Eiswert, leading the community to believe Eiswert said racist and antisemitic things about teachers and students.

"We now have conclusive evidence that the recording was not authentic," Baltimore County Police Chief Robert McCullough told reporters during a news conference Thursday. "It's been determined the recording was generated through the use of artificial intelligence technology."

Dazhon Darien, 31, was arrested Thursday on charges of stalking, theft, disruption of school operations and retaliation against a witness after a monthslong investigation from the Baltimore County Police Department.

Attempts to contact Darien or Eiswert for comment were not successful.

The wild, headline-making details of this case aside, it emphasizes the serious potential for criminal misuse of artificial intelligence that experts have been warning about for some time, said Hany Farid, a professor at the University of California, Berkeley, who specializes in digital forensics.

Farid said he helped analyze the recording for police. Baltimore County police also consulted with another analyst and experts at the FBI. Their conclusion was that the recording was suspicious and unlikely to be authentic.

For just a few dollars, anyone can harness artificial intelligence to make audio and visual deepfakes. Stakes are high, but deepfake detection software doesn't always get it right.

This Baltimore-area case is not a canary in the coal mine. "I think the canary has been dead for quite a while," Farid said.

"What's so particularly poignant here is that this is a Baltimore school principal. This is not Taylor Swift. It's not Joe Biden. It's not Elon Musk. It's just some guy trying to get through his day," he said. "It shows you the vulnerability. How anybody can create this stuff and they can weaponize it against anybody."

Darien's alleged scheme began in January in an attempt to retaliate against Eiswert, investigators wrote in the charging documents provided to NPR. The two men were at odds with each other over Darien's "work performance challenges," police wrote.

Eiswert launched an investigation into Darien in December 2023 over the potential mishandling of $1,916 in school funds. The money was paid to a person hired as an assistant girls' soccer coach, but the person never did the job, according to police.

Further, Eiswert had reprimanded Darien for firing a coach without his approval.

Eiswert had told Darien that his contract was possibly "not being renewed next semester," according to the arrest warrant.

The Baltimore County police launched their investigation into the alleged AI-generated recording of Principal Eiswert in January. (Julio Cortez/AP)

On Jan. 17, detectives found out about the voice recording purporting to be of Eiswert that was spreading on social media. The recording, which can still be found online, allegedly caught Eiswert saying disparaging comments.

"The audio clip, the catalyst of this investigation, had profound repercussions," the charging documents read. "It not only led to Eiswert's temporary removal from the school but also triggered a wave of hate-filled messages on social media and numerous calls to the school. The recording also caused significant disruptions for the PHS staff and students."

The school was inundated with threatening messages and Billy Burke, head of the union that represents Eiswert, said the principal's family was being harassed and threatened, according to reporting from the Baltimore Banner.

Eiswert told police from the start of the investigation that he believed the recording was fake.

Darien was taken into custody Thursday morning at Baltimore/Washington International Thurgood Marshall Airport after attempting to board a flight to Houston, Chief McCullough said.

Security stopped Darien over a gun he packed in his bags and when officers ran his name in a search they found he had a warrant out for his arrest, McCullough said.

Darien was released on a $5,000 unsecured bond. His trial date is scheduled for June 11.

After following this story, Farid is left with the question: "What is going to be the consequence of this?"

He's been studying digital manipulation for more than 20 years and the problems have only gotten "much bigger and the consequences more severe."

Eiswert has been on leave since the audio recordings went public. Pikesville High School has been run by district staff since Eiswert left and the plan remains to keep those temporary administrators on the job through the end of the school year, said Myriam Rogers, the superintendent of Baltimore County Public Schools.

As for Darien, Rogers said, "We are taking appropriate action regarding the arrested employee's conduct up to and including a recommendation for termination."

Baltimore County Executive John Olszewski said during Thursday's press conference that this case highlights the need "to make some adaptations to bring the law up to date with the technology that was being used."

Farid said there remains, generally, a lackluster response from regulators reluctant to put checks and balances on tech companies that develop these tools or to establish laws that properly punish wrongdoers and protect people.

"I don't understand at what point we're going to wake up as a country and say, like, why are we allowing this? Where are our regulators?"

See the original post here:
A Baltimore-area teacher is accused of using AI to make his boss appear racist - NPR


Machine learning and experiment | symmetry magazine – Symmetry magazine

Every day in August of 2019, physicist Dimitrios Tanoglidis would walk to the Plein Air Café next to the University of Chicago and order a cappuccino. After finding a table, he would spend the next several hours flipping through hundreds of thumbnail images of white smudges recorded by the Dark Energy Camera, a telescope that at the time had observed 300 million astronomical objects.

For each white smudge, Tanoglidis would ask himself a simple yes-or-no question: Is this a galaxy? "I would go through about 1,000 images a day," he says. "About half of them were galaxies, and the other half were not."

After about a month, Tanoglidis, who was a University of Chicago PhD student at the time, had built up a catalogue of 20,000 low-brightness galaxies.

Then Tanoglidis and his team used this dataset to create a tool that, once trained, could evaluate a similar dataset in a matter of moments. "The accuracy of our algorithm was very close to the human eye," he says. "In some cases, it was even better than us and would find things that we had misclassified."

The tool they created was based on machine learning, a type of software that learns as it digests data, says Aleksandra Ciprijanovic, a physicist at the US Department of Energy's Fermi National Accelerator Laboratory who at the time was one of Tanoglidis's research advisors. "It's inspired by how neurons in our brains work," she says, adding that this added brainpower will be essential for analyzing exponentially larger datasets from future astronomical surveys. "Without machine learning, we'd need a small army of PhD students to go through the same type of dataset."
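
The yes-or-no labeling task described above can be sketched in a few lines. This is an illustrative toy, not the collaboration's actual tool: the synthetic "galaxy" images (a bright central blob) and "non-galaxy" images (pure noise), and the choice of a scikit-learn logistic-regression classifier, are all assumptions made for the sketch.

```python
# Toy binary image classification: label thumbnails as "galaxy" (1) or
# "not a galaxy" (0). Synthetic 16x16 images stand in for real cutouts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Fake "galaxies": a bright central blob plus noise; fake "non-galaxies": noise only.
xs, ys = np.meshgrid(np.arange(16), np.arange(16))
blob = np.exp(-((xs - 8) ** 2 + (ys - 8) ** 2) / 10.0)
galaxies = blob + 0.3 * rng.normal(size=(n // 2, 16, 16))
noise = 0.3 * rng.normal(size=(n // 2, 16, 16))

X = np.vstack([galaxies, noise]).reshape(n, -1)  # flatten each image
y = np.array([1] * (n // 2) + [0] * (n // 2))
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train once, then classify thousands of new thumbnails in moments.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

Once trained, calling `clf.predict` on a new batch of flattened thumbnails replaces the month of manual inspection the article describes.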

Today, the Dark Energy Survey collaboration has a catalogue of 700 million astronomical objects, and scientists continue to use (and improve) Tanoglidis's tool to analyze images that could show previously undiscovered galaxies.

"In astronomy, we have a huge amount of data," Ciprijanovic says. "No matter how many people and resources we have, we'll never have enough people to go through all the data."

Classification ("this is probably a photo of a galaxy" versus "this is probably not a photo of a galaxy") was one of machine learning's earliest applications in science. Over time, its uses have continued to evolve.

Machine learning, which is a subset of artificial intelligence, is a type of software that can, among other things, help scientists understand the relationships between variables in a dataset.

According to Gordon Watts, a physicist at the University of Washington, scientists traditionally figured out these relationships by plotting the data and looking for the mathematical equations that could describe it. "Math came before the software," Watts says.

This math-only method is relatively straightforward when looking for the relationship between only a few variables: the pressure of a gas as a function of its temperature and volume, or the acceleration of a ball as a function of the force of an athlete's kick and the ball's mass. But finding these relationships with nothing but math becomes nearly impossible as you add more and more variables.

"A lot of the problems we're tackling in science today are very complicated," Ciprijanovic says. "Humans can do a good job with up to three dimensions, but how do you think about a dataset if the problem is 50- or 100-dimensional?"

This is where machine learning comes in.

"Artificial intelligence doesn't care about the dimensionality of the problems," Ciprijanovic says. "It can find patterns and make sense of the data no matter how many different dimensions are added."

Some physicists have been using machine-learning tools since the 1950s, but their widespread use in the field is a relatively new phenomenon.

"The idea to use a [type of machine learning called a] neural network was proposed to the CDF experiment at the Tevatron in 1989," says Tommaso Dorigo, a physicist at the Italian National Institute for Nuclear Physics, INFN. "People in the collaboration were both amused and disturbed by this."

Amused because of its novelty; disturbed because it added a layer of opacity into the scientific process.

Machine-learning models are sometimes called "black boxes" because it is hard to tell exactly how they are handling the data put into them; their large number of parameters and complex architectures are difficult to understand. Because scientists want to know exactly how a result is calculated, many physicists have been skeptical of machine learning and reluctant to implement it into their analyses. "In order for a scientific collaboration to sign off on a new method, they first must exhaust all possible doubts," Dorigo says.

Scientists found a reason to work through those doubts after the Large Hadron Collider came online, an event that coincided with the early days of the ongoing boom in machine learning in industry.

Josh Bendavid, a physicist at the Massachusetts Institute of Technology, was an early adopter. "When I joined CMS, machine learning was a thing, but seeing limited use," he says. "But there was a big push to implement machine learning into the search for the Higgs boson."

The Higgs boson is a fundamental particle that helps explain why some particles have mass while others do not. Theorists predicted its existence in the 1950s, but finding it experimentally was a huge challenge. That's because Higgs bosons are both incredibly rare and incredibly short-lived, quickly decaying into other particles such as pairs of photons.

In 2010, when the LHC experiments first started collecting data for physics, machine learning was widely used in industry and academia for classification ("this is a photo of a cat" versus "this is not a photo of a cat"). Physicists were using machine learning in a similar way ("this is a collision with two photons" versus "this is not a collision with two photons").

But according to Bendavid, simply finding photons was not enough. Pairs of photons are produced in roughly one out of every 100 million collisions in the LHC. But Higgs bosons that decay into pairs of photons are produced in only one of 500 billion. To find Higgs bosons, scientists needed to find sets of photons that had a combined energy close to the mass of the Higgs. This means they needed more complex algorithms: ones that could not only recognize photons, but also interpret the energy of photons based on how they interacted with the detector. "It's like trying to estimate the weight of a cat in a photograph," Bendavid says.
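
The kinematics behind that search can be made concrete. Two (effectively massless) photons with energies E1 and E2 and opening angle θ have an invariant mass m = sqrt(2·E1·E2·(1 − cos θ)), and genuine Higgs decays cluster near the Higgs mass of about 125 GeV. The numbers below are illustrative; this is a textbook formula, not CMS or ATLAS analysis code.

```python
# Diphoton invariant mass: the quantity whose spectrum shows the Higgs peak.
import math

def diphoton_mass(e1, e2, theta):
    """Invariant mass (GeV) of two massless photons with energies e1 and e2
    (GeV) separated by an opening angle theta (radians)."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(theta)))

# Two back-to-back 62.5 GeV photons reconstruct to the Higgs mass:
m = diphoton_mass(62.5, 62.5, math.pi)
print(f"{m:.1f} GeV")  # 125.0 GeV
```

This is why photon energy resolution matters so much: an error in e1 or e2 smears the reconstructed mass and washes out the narrow peak sitting on a huge background.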

That became possible when LHC scientists created high-quality detector simulations, which they could use to train their algorithms to find the photons they were looking for, Bendavid says.

Bendavid and his colleagues simulated millions of photons and looked at how they lost energy as they moved through the detector. According to Bendavid, the algorithms they trained were much more sensitive than traditional techniques.

And the algorithms worked. In 2012, the CMS and ATLAS experiments announced the discovery of the Higgs boson, just two years into studying particle collisions at the LHC.

"We would have needed a factor of two more data to discover the Higgs boson if we had tried to do the analysis without machine learning," Bendavid says.

After the Higgs discovery, the LHC research program saw its own boom in machine learning. "Before 2012, you would have had a hard time to publish something which used neural networks," Dorigo says. After 2012, if you wanted to publish an analysis that didn't use machine learning, you'd face questions and objections.

Today, LHC scientists use machine learning to simulate collisions, evaluate and process raw data, tease signal from background, and even search for anomalies. While these advancements were happening at the LHC, scientists were watching closely from another, related field: neutrino research.

Neutrinos are ghostly particles that rarely interact with ordinary matter. According to Jessie Micallef, a fellow at the National Science Foundation's Institute for Artificial Intelligence and Fundamental Interactions at MIT, early neutrino experiments would detect only a few particles per year. With such small datasets, scientists could easily reconstruct and analyze events with traditional methods.

That is how Micallef worked on a prototype detector as an intern at Lawrence Berkeley National Laboratory in 2015. "I would measure electrons drifting in a little tabletop detector, come back to my computer, and make plots of what we saw," they say. "I did a lot of programming to find the best-fit lines for our data."

But today, their detectors and neutrino beams are much larger and more powerful. "We're talking with people at the LHC about how to deal with pileup," Micallef says.

Neutrino physicists now use machine learning both to find the traces neutrinos leave behind as they pass through the detectors and to extract their properties, such as their energy and flavor. These days, Micallef collects their data, imports it into their computer, and starts the analysis process. But instead of toying with the equations, Micallef says that they let machine learning do a lot of the analysis for them.

"At first, it seemed like a whole new world," they say, but it wasn't a magic bullet. Then there was validating the output. "I would change one thing, and maybe the machine-learning algorithm would do really good in one area but really bad in another."

"My work became thinking about how machine learning works, what its limitations are, and how we can get the most out of it."

Today, Micallef is developing machine-learning tools that will help scientists with some of the unique challenges of working with neutrinos, including using gigantic detectors to study not just high-powered neutrinos blasting through from outside the Milky Way, but also low-energy neutrinos that could come from nearby.

Neutrino detectors are so big that the sizes of the signals they measure can be tiny by comparison. For instance, the IceCube experiment at the South Pole uses about a cubic kilometer of ice peppered with 5,000 sensors. But when a low-energy neutrino hits the ice, only a handful of those sensors light up.

"Maybe a dozen out of 5,000 detectors will see the neutrino," Micallef says. "The pictures we're looking at are mostly empty space, and machine learning can get confused if you teach it that only 12 sensors out of 5,000 matter."

Neutrino physicists and scientists at the LHC are also using machine learning to give a more nuanced interpretation of what they are seeing in their detectors.

"Machine learning is very good at giving a continuous probability," Watts says.

For instance, instead of classifying a particle in a binary way ("this event is a muon neutrino" versus "this event is not a muon neutrino"), machine learning can provide an uncertainty associated with its assessment.

"This could change the overall outcome of our analysis," Micallef says. "If there is a lot of uncertainty, it might make more sense for us to throw that event away or analyze it by hand. It's a much more concrete way of looking at how reliable these methods are and is going to be more and more important in the future."
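
The workflow Micallef describes can be sketched directly: rather than taking the classifier's hard yes/no label, read off its probability and route low-confidence events to hand analysis. The synthetic data and the 0.2/0.8 confidence thresholds below are assumptions for the sketch, not anything from a real neutrino analysis.

```python
# Using a classifier's continuous probability instead of a binary label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4)) + 0.5          # four toy event features
y = (X.sum(axis=1) + rng.normal(size=1000) > 2.0).astype(int)

clf = LogisticRegression().fit(X, y)

# predict_proba gives P(class) rather than a hard 0/1 decision.
probs = clf.predict_proba(X)[:, 1]

# Keep events the model is confident about; flag the rest for hand analysis.
confident = (probs < 0.2) | (probs > 0.8)
print(f"kept {confident.mean():.0%} of events; "
      f"{(~confident).mean():.0%} flagged for hand analysis")
```

The thresholds are a tunable analysis choice: tightening them trades sample size for reliability, which is exactly the kind of decision the continuous output makes possible.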

Physicists use machine learning throughout almost all parts of data collection and analysis. But what if machine learning could be used to optimize the experiment itself? "That's the dream," Watts says.

Detectors are designed by experts with years of experience, and every new detector incrementally improves upon what has been done before. But Dorigo says he thinks machine learning could help detector designers innovate. "If you look at calorimeters designed in the 1970s, they look a lot like the calorimeters we have today," Dorigo says. "There is no notion of questioning paradigms."

Experiments such as CMS and ATLAS are made from hundreds of individual detectors that work together to track and measure particles. Each subdetector is enormously complicated, and optimizing each one's design, not as an individual component but as a part of a complex ecosystem, is nearly impossible. "We accept suboptimal results because the human brain is incapable of thinking in 1,000 dimensions," Dorigo says.

But what if physicists could look at the detector holistically? According to Watts, physicists could (in theory) build a machine-learning algorithm that considers physics goals, budget, and real-world limitations to choose the optimal detector design: a symphony of perfectly tailored hardware all working in harmony.

Scientists still have a long way to go. "There's a lot of potential," Watts says. "But we haven't even learned to walk yet. We're only just starting to crawl."

They are making progress. Dorigo is a member of the Southern Wide-field Gamma-ray Observatory, a collaboration that wants to build an array of 6,000 particle detectors in the highlands of South America to study gamma rays from outer space. The collaboration is currently assessing how to arrange and place these 6,000 detectors. "We have an enormous number of possible solutions," Dorigo says. "The question is: how to pick the best one?"

To find out, Dorigo and his colleagues took into account the questions they wanted to answer, the measurements they wanted to take, and the number of detectors they had available to use. This time, though, they also developed a machine-learning tool that did the same, and found that it agreed with them.

They plugged a number of reasonable initial layouts into the program and allowed it to run simulations and gradually tweak the detector placement. "No matter the initial layout, every simulation always converged to the same solution," Dorigo says.
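
The convergence check Dorigo describes can be illustrated with a toy: start optimization runs from different initial layouts, let each run keep only tweaks that improve an objective, and verify that they all land on the same arrangement. Everything here is a stand-in: a 1-D array of three "detectors", a made-up coverage objective, and simple random local search rather than the collaboration's actual optimization code.

```python
# Toy detector-placement optimization: different starting layouts should
# converge to the same solution if the optimizer is doing its job.
import numpy as np

def coverage_loss(positions):
    """Toy objective: detectors should spread evenly over [0, 1].
    The variance of the gaps is zero exactly when spacing is uniform."""
    p = np.sort(positions)
    gaps = np.diff(np.concatenate(([0.0], p, [1.0])))
    return np.var(gaps)

def optimize(initial, steps=20000, scale=0.01, seed=0):
    """Random local search: propose a small random tweak to the layout
    and keep it only if the objective improves."""
    rng = np.random.default_rng(seed)
    pos = np.clip(np.array(initial, dtype=float), 0.0, 1.0)
    loss = coverage_loss(pos)
    for _ in range(steps):
        trial = np.clip(pos + scale * rng.normal(size=pos.size), 0.0, 1.0)
        trial_loss = coverage_loss(trial)
        if trial_loss < loss:
            pos, loss = trial, trial_loss
    return np.sort(pos)

# Two very different initial layouts...
a = optimize([0.1, 0.2, 0.3], seed=1)
b = optimize([0.9, 0.5, 0.05], seed=2)
print(a, b)  # ...both approach the even layout near [0.25, 0.5, 0.75]
```

Agreement between independent runs is weak evidence of a global optimum rather than proof, but, as in the SWGO study, it is a practical sanity check that the tool is not just memorizing its starting point.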

Even though he knows there is still a long way to go, Dorigo says that machine-learning-aided detector design is the future. "We're designing experiments today that will operate 10 years from now," he says. "We have to design our detectors to work with the analysis tools of the future, and so machine learning has to be an ingredient in those decisions."

See original here:
Machine learning and experiment | symmetry magazine - Symmetry magazine


Meta Says It Plans to Spend Billions More on A.I. – The New York Times

Meta projected on Wednesday that revenue for the current quarter would be lower than what Wall Street anticipated and said it would spend billions of dollars more on its artificial intelligence efforts, even as it reported robust revenue and profits for the first three months of the year.

Revenue for the company, which owns Facebook, Instagram, WhatsApp and Messenger, was $36.5 billion in the first quarter, up 27 percent from $28.6 billion a year earlier and slightly above Wall Street estimates of $36.1 billion, according to data compiled by FactSet. Profit was $12.4 billion, more than double the $5.7 billion a year earlier.

But Meta's work on A.I., which requires substantial computing power, comes with a lofty price tag. The Silicon Valley company said it planned to raise its spending forecast for the year to $35 billion to $40 billion, up from a previous estimate of $30 billion to $37 billion. The move was driven by heavy investments in A.I. infrastructure, including data centers, chip designs, and research and development.

Meta also predicted that revenue for the current quarter would be $36.5 billion to $39 billion, lower than analysts' expectations.

The combination of higher spending and lighter-than-expected revenue spooked investors, who sent Meta's shares down more than 16 percent on Wednesday afternoon after they ended regular trading at $493.50.

"Meta's earnings should serve as a stark warning for companies reporting this earnings season," said Thomas Monteiro, a senior analyst at Investing.com. While the company's results were robust, "it didn't matter as much as the reported lowering of revenue expectations for the current quarter," he said, adding, "Investors are currently looking at the near future with heavy mistrust."

See the original post here:
Meta Says It Plans to Spend Billions More on A.I. - The New York Times


Elon Musk’s xAI Close to Raising $6 Billion – PYMNTS.com

Elon Musk's artificial intelligence (AI) startup xAI is reportedly close to raising $6 billion from investors.

The funding round would value xAI at $18 billion, Bloomberg reported Friday (April 26).

Silicon Valley venture capital (VC) firm Sequoia Capital has committed to investing in the startup, according to the Financial Times (FT), which reported the same figures as Bloomberg.

Musk has also approached other investors who, like Sequoia Capital, participated in his 2022 acquisition of Twitter, which he later renamed X, the FT reported.

Musk announced the launch of xAI in July 2023 after hinting for months that he wanted to build an alternative to OpenAI's AI-powered chatbot, ChatGPT. He was involved in the creation of OpenAI but left its board in 2018 and has been increasingly critical of the company and cautious about developments around AI in general.

Two days later, during a Twitter Spaces introduction of xAI to the public, Musk said that while he sees the firm in direct competition with larger businesses like OpenAI, Microsoft, Alphabet and Meta, as well as upstarts like Anthropic, his firm is taking a different approach to establishing its foundation model.

"AGI [artificial general intelligence] being brute forced is not succeeding," Musk said, adding that while xAI is "not trying to solve AGI on a laptop, [and] there will be heavy compute," his team will have free rein to explore ideas other than scaling up the foundational model's data parameters.

In November 2023, xAI rolled out its AI model called Grok, saying on its website: "Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don't use it if you hate humor!"

The company added that Grok has "a real-time knowledge of the world" thanks to the Musk-owned social media platform X; will answer "spicy questions that are rejected by most of the other AI systems"; and upon its launch had capabilities rivaling those of Meta's LLaMA 2 AI model and OpenAI's GPT-3.5.

In March, xAI unveiled its open-source AI model. Musk said at the time: "We are releasing the base model weights and network architecture of Grok-1, our large language model. Grok-1 is a 314 billion parameter Mixture-of-Experts model trained from scratch by xAI."

View post:
Elon Musk's xAI Close to Raising $6 Billion - PYMNTS.com
