
Research reveals how Artificial Intelligence can help look for alien lifeforms on Mars and other planets – WION

Aliens have long been a fascinating subject for humans. Innumerable movies, TV series and books are proof of this allure. Our search for extraterrestrial life has even taken us to other planets, albeit remotely. This search has progressed by leaps and bounds in the last few years, but it is still in its nascent stages. Global space agencies like the National Aeronautics and Space Administration (NASA) and the China National Space Administration (CNSA) have in recent years sent rovers to Mars to aid this search remotely. However, the accuracy of these random searches remains low.


To remedy this, the Search for Extraterrestrial Intelligence (SETI) Institute has been exploring the use of artificial intelligence (AI) for finding extraterrestrial life on Mars and other icy worlds.

According to a report on Space, a recent study from SETI states that AI could be used to detect microbial life in the depths of the icy oceans on other planets.

In a paper published in Nature Astronomy, the team details how they trained a machine-learning model to scan data for signs of microbial life or other unusual features that could be indicative of alien life.


Using a machine-learning architecture called a convolutional neural network (CNN), a multidisciplinary team of scientists led by SETI's Kim Warren-Rhodes has mapped sparse lifeforms on Earth. Warren-Rhodes worked alongside experts from other prestigious institutions, including Michael Phillips of the Johns Hopkins Applied Physics Lab and Freddie Kalaitzis of the University of Oxford.

The system they developed combined statistical ecology with AI to detect biosignatures with up to 87.5 per cent accuracy, compared with only 10 per cent for random searches. According to the researchers, it can potentially reduce the search area by up to 97 per cent, making it easier for scientists to locate potential chemical traces of life.
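The paper does not ship code, but a minimal sketch of the kind of convolutional classifier it describes, one that scores terrain-image tiles for the likelihood of containing a biosignature-bearing microhabitat, might look like the following; the architecture, tile size and single-label setup are illustrative assumptions, not the published model:

```python
# Illustrative sketch only: a small CNN that scores image tiles for the
# probability of containing a biosignature-bearing microhabitat.
# Architecture, tile size, and labels are assumptions, not the SETI model.
import torch
import torch.nn as nn

class BiosignatureCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # one logit: biosignature vs. background
        )

    def forward(self, x):
        return self.head(self.features(x))

# Score a batch of 64x64 RGB terrain tiles; the sigmoid turns each logit
# into a probability that could guide where a rover samples next.
model = BiosignatureCNN()
tiles = torch.rand(8, 3, 64, 64)
probs = torch.sigmoid(model(tiles))
print(probs.squeeze(1))
```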


To test their system, they initially focused on the sparse lifeforms that dwell in salt domes, rocks, and crystals at Salar de Pajonales, at the boundary of the Chilean Atacama Desert and Altiplano.

Warren-Rhodes and her team collected over 8,000 images and 1,000 samples from Salar de Pajonales to search for photosynthetic microbes that may represent a biosignature on NASA's "ladder of life detection" for finding life beyond Earth.

The team also used drone imagery, simulating the Martian terrain images captured by the High Resolution Imaging Science Experiment (HiRISE) camera aboard NASA's Mars Reconnaissance Orbiter, to examine the region.

They found that microbial life in the region is concentrated in biological hotspots that strongly relate to the availability of water.

Researchers suggest that the machine learning tools developed can be used in robotic planetary missions like NASA's Perseverance Rover. The tools can guide rovers towards areas with a higher probability of having traces of alien life, even if they are rare or hidden.

"With these models, we can design tailor-made roadmaps and algorithms to guide rovers to places with the highest probability of harbouring past or present life no matter how hidden or rare," explained Warren-Rhodes.

(With inputs from agencies)


Original post:
Research reveals how Artificial Intelligence can help look for alien lifeforms on Mars and other planets - WION

Read More..

US Artificial Intelligence Regulations: Watch List for 2023 | Insights & … – Goodwin Procter

Companies are developing, deploying, and interacting with artificial intelligence (AI) technologies more than ever. At Goodwin, we are keeping a close eye on any regulations that may affect companies operating in this cutting-edge space.

For companies operating in Europe, the landscape is governed by a number of in-force and pending EU legislative acts, most notably the EU AI Act, which is expected to be passed later this year; it was covered in our prior client alert here: EU Technology Regulation: Watch List for 2023 and Beyond. The United Kingdom has recently indicated that it may take a different approach, as discussed in our client alert on the proposed framework for AI regulation in the United Kingdom here: Overview of the UK Government's AI White Paper.

For companies operating in the United States, the landscape of AI regulation remains less clear. To date, there has been no serious consideration of a US analog to the EU AI Act or any sweeping federal legislation to govern the use of AI, nor is there any substantial state legislation in force (although there are state privacy laws that may extend to AI systems that process certain types of personal data).

That said, we have recently seen certain preliminary and sector-specific activity that gives clues about how the US federal government is thinking about AI and how it may look to govern it in the future. Specifically, the National Institute of Standards and Technology (NIST), the Federal Trade Commission (FTC), and the Food and Drug Administration (FDA) have all provided recent guidance. This client alert reviews this activity and is important reading for any business implementing or planning to implement AI technologies in the United States.

On January 26, 2023, NIST, an agency of the US Department of Commerce, released its Artificial Intelligence Risk Management Framework 1.0 (the RMF) as a voluntary, non-sector-specific, use-case-agnostic guide for technology companies that are designing, developing, deploying, or using AI systems, to help manage the many risks of AI. Beyond risk management, the RMF seeks to promote trustworthy and responsible development and use of AI systems.

As the federal AI standards coordinator, NIST works with government and industry leaders both in the United States and internationally to develop technical standards to promote the adoption of AI, enumerated in the Technical AI Standards section on its website. In addition, Section 5301 of the National Defense Authorization Act for Fiscal Year 2021 directed NIST to develop a voluntary risk management framework for trustworthy AI systems, the RMF. Although the RMF is voluntary, it does provide good insights into the considerations the federal government is likely to take into account in any future regulation of AI and, as it evolves, it could eventually be adopted as an industry standard. We summarize the key aspects below.

A key recognition by the RMF is that humans typically assume AI systems are objective and high functioning. This assumption can inadvertently cause harm to people, communities, organizations, or broader ecosystems, including the environment. Enhancing the trustworthiness of an AI system can help mitigate the risk of this harm. The RMF defines trustworthy AI as having seven characteristics: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.

The RMF also notes that AI systems are subject to certain unique risks, such as the following:

The RMF outlines four core functions (govern, map, measure, and manage) to employ throughout the AI system's life cycle to manage risk, and breaks these core functions down into further subcategories. The RMF's companion Playbook suggests concrete action items to help companies implement these core functions.
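As an illustration only, here is a minimal sketch of how a team might organize an internal risk register around those four functions; the structure and the action items are invented placeholders, not NIST's official subcategories or Playbook content:

```python
# Minimal sketch of an internal AI risk register organized by the RMF's
# four core functions. All entries are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class RiskItem:
    description: str
    owner: str
    status: str = "open"

@dataclass
class RmfRegister:
    govern: list[RiskItem] = field(default_factory=list)
    map: list[RiskItem] = field(default_factory=list)
    measure: list[RiskItem] = field(default_factory=list)
    manage: list[RiskItem] = field(default_factory=list)

register = RmfRegister()
register.map.append(RiskItem("Inventory all AI systems and intended uses", "ml-platform"))
register.measure.append(RiskItem("Track disparate-impact metrics per release", "responsible-ai"))

# Print the register grouped by core function.
for function in ("govern", "map", "measure", "manage"):
    print(function, [r.description for r in getattr(register, function)])
```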

In addition to NIST's release of the RMF, there has been some recent guidance from other bodies within the federal government. For example, the FTC has suggested it may soon increase its scrutiny of businesses that use AI. Notably, the FTC has recently issued various blog posts warning businesses to avoid unfair or misleading practices, including "Keep your AI claims in check" and "Chatbots, deepfakes, and voice clones: AI deception for sale."

For companies interested in using AI technologies for healthcare-related decision making, the FDA has also announced its intention to regulate many AI-powered clinical decision support tools as devices. More information on those regulations can be found in our prior client alert available here: FDA Issues Final Clinical Decision Support Software Guidance.

While the recent actions from NIST, the FTC, and the FDA detailed above do provide some breadcrumbs about what future US AI regulation may look like, there is no question that, at the moment, there are few hard-and-fast rules that US AI companies can look to in order to guide their conduct. It seems inevitable that regulation in some form will eventually emerge, but when that will occur is anybody's guess. Goodwin will continue to follow the developments and publish updates as they become available.

UPDATE: On April 13, 2023, the day after this alert was initially published, reports surfaced that US Senator Chuck Schumer is leading a congressional effort to establish US regulations on AI. Reports indicated that Schumer has developed a framework for regulation that is currently being shared with, and refined with the input of, industry experts. Few details of the framework were initially available, but reports indicate that the regulations will focus on four guardrails: (1) identification of who trained the algorithm and who its intended audience is, (2) disclosure of its data source, (3) an explanation of how it arrives at its responses, and (4) transparent and strong ethical boundaries. (See: "Scoop: Schumer lays groundwork for Congress to regulate AI" (axios.com).) There is no clear timeline yet for when this framework may become established law, or whether that will occur at all, but Goodwin will continue to track developments and publish alerts as they become available.

Read more here:
US Artificial Intelligence Regulations: Watch List for 2023 | Insights & ... - Goodwin Procter

Read More..

Amazon Joins the Rush Into Artificial Intelligence – Investopedia


Amazon (AMZN) became the latest big tech firm to go all-in on artificial intelligence (AI). The company announced that it is offering new AI language models through its Amazon Web Services (AWS) cloud platform. Called Amazon Bedrock, the product will allow customers to boost their software with AI systems that create text, similar to OpenAI's ChatGPT chatbot.
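A hedged sketch of what calling a Bedrock-hosted text model looks like with boto3 follows, using the runtime API as it later became generally available (Bedrock was in limited preview at the time of the announcement); the model ID and the request and response shapes are assumptions that vary by model provider:

```python
# Hedged sketch: calling a text model through Amazon Bedrock with boto3.
# Model ID and request/response shapes differ by provider; this mirrors
# the Amazon Titan text API and may not match the preview release.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({"inputText": "Write a one-line product description for a smart kettle."})
response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed model ID, for illustration
    contentType="application/json",
    accept="application/json",
    body=body,
)
payload = json.loads(response["body"].read())
print(payload["results"][0]["outputText"])
```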

Swami Sivasubramanian, vice president of Data and Machine Learning at AWS, said that Amazon's mission "is to make it possible for developers of all skill levels and for organizations of all sizes to innovate using generative AI." He indicated that this is just the beginning of what the company believes "will be the next wave of machine learning."

The competition in the AI field is heating up. In March, OpenAI released its latest version of ChatGPT, and Meta Platforms (META), Microsoft (MSFT), and Alphabet's (GOOGL) Google all recently introduced their moves into the sector.

Sivasubramanian added that "we are truly at an exciting inflection point in the widespread adoption of machine learning" and that most customer experiences and applications "will be reinvented with generative AI."

The news helped lift Amazon shares 4.7% on April 13.

More here:
Amazon Joins the Rush Into Artificial Intelligence - Investopedia

Read More..

What is the Next Big Step in Artificial Intelligence? – Analytics Insight

This article describes the next big step in artificial intelligence, which may be instant videos.

Runway is one of several businesses developing artificial intelligence technology that may soon allow anyone to create videos by merely putting a few words into a box on a computer screen. Runway expects to launch its service to a select group of testers this week. The next big step in artificial intelligence seems to be instant videos.

These systems represent the next step in an industry competition to develop new varieties of artificial intelligence that some think might be the next great thing in technology, on par with web browsers or the iPhone. The competition includes industry heavyweights like Microsoft and Google as well as many smaller firms. New video-generation technology might speed up the work of filmmakers and other digital artists while also providing a rapid and innovative way to spread false material online, making it even more difficult to determine what is true. The systems are examples of generative AI, which can produce text, images, and sounds quickly.

The first video-generation systems were introduced by Google and Meta, the parent company of Facebook, last year, but they were kept from the general public out of concern that they might one day be used to quickly and effectively disseminate false material.

Despite the hazards, Runway's CEO, Cristobal Valenzuela, stated that he thought the technology was too vital to retain in a research lab. He declared that it was among the most astounding technologies created in the last 100 years, and that people must use it. Of course, the ability to edit and manipulate video and film is nothing new; it has been a practice among filmmakers for more than a century, and researchers and digital artists have long been using diverse methods.

The videos are only four seconds long, and if you pay close attention, you can see they are choppy and indistinct. Images can occasionally be strange, twisted, and unsettling; the system is capable of fusing inanimate objects like telephones and balls with living creatures like dogs and cats. But given the right prompt, it creates videos that demonstrate where the technology is headed.

Like previous generative AI systems, Runway's system learns by examining digital material, in this case pictures, videos, and captions that describe what those pictures show. Researchers are optimistic that they can quickly advance and broaden the capabilities of this type of technology by training it on ever-larger volumes of data. Soon, according to experts, such systems will produce polished short films with dialogue and music.
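As a rough illustration of that training setup, the sketch below shows how pairs of short clips and captions might feed such a system's data pipeline; the dataset layout and the comment describing the training step are assumptions, since Runway's actual system is not public:

```python
# Illustrative sketch of the training setup described above: pairs of short
# clips and captions feeding a caption-conditioned generator. Layout and
# interface are assumptions; Runway's actual system is not public.
import torch
from torch.utils.data import Dataset, DataLoader

class ClipCaptionDataset(Dataset):
    """Yields (video_tensor, caption) pairs: frames x channels x H x W."""
    def __init__(self, n=32):
        self.n = n
    def __len__(self):
        return self.n
    def __getitem__(self, i):
        video = torch.rand(16, 3, 64, 64)      # 16 stand-in frames
        caption = f"placeholder caption {i}"   # stand-in text
        return video, caption

loader = DataLoader(ClipCaptionDataset(), batch_size=4)
for videos, captions in loader:
    # A real system would encode the captions, corrupt the frames with
    # noise, and train the generator to denoise conditioned on the text.
    print(videos.shape, captions[0])
    break
```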

The rest is here:
What is the Next Big Step in Artificial Intelligence? - Analytics Insight

Read More..

Siemens and Microsoft drive industrial productivity with generative … – Microsoft

Siemens and Microsoft are harnessing the collaborative power of generative artificial intelligence (AI) to help industrial companies drive innovation and efficiency across the design, engineering, manufacturing and operational lifecycle of products. To enhance cross-functional collaboration, the companies are integrating Siemens Teamcenter software for product lifecycle management (PLM) with Microsoft's collaboration platform Teams and the language models in Azure OpenAI Service, as well as other Azure AI capabilities. At Hannover Messe, the two technology leaders will demonstrate how generative AI can enhance factory automation and operations through AI-powered software development, problem reporting and visual quality inspection.

"The integration of AI into technology platforms will profoundly change how we work and how every business operates," said Scott Guthrie, executive vice president, Cloud + AI, Microsoft. "With Siemens, we are bringing the power of AI to more industrial organizations, enabling them to simplify workflows, overcome silos and collaborate in more inclusive ways to accelerate customer-centric innovation."

With the new Teamcenter app for Microsoft Teams, anticipated later in 2023, the companies are enabling design engineers, frontline workers and teams across business functions to close feedback loops faster and solve challenges together. For example, service engineers or production operatives can use mobile devices to document and report product design or quality concerns using natural speech. Through Azure OpenAI Service, the app can parse that informal speech data, automatically creating a summarized report and routing it within Teamcenter to the appropriate design, engineering or manufacturing expert. To foster inclusion, workers can record their observations in their preferred language, which is then translated into the official company language with Microsoft Azure AI. Microsoft Teams provides user-friendly features like push notifications to simplify workflow approvals, reduce the time it takes to request design changes and speed up innovation cycles. The Teamcenter app for Microsoft Teams can enable millions of workers who do not have access to PLM tools today to more easily influence the design and manufacturing process as part of their existing workflows.
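The companies have not released the app's internals, but the reporting flow described above can be sketched with the Azure OpenAI chat API as it existed at the time, via the pre-1.0 openai Python library; the endpoint, deployment name, API version and report fields are all placeholders:

```python
# Hedged sketch of the reporting flow: an informal spoken observation
# (already transcribed) is summarized into a structured defect report
# with Azure OpenAI. Endpoint, deployment, and fields are placeholders.
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"  # placeholder
openai.api_version = "2023-03-15-preview"                    # placeholder
openai.api_key = "YOUR-KEY"                                  # placeholder

transcript = ("uh, on line three the housing on the left gripper is cracked "
              "again, third time this week, looks like the same batch")

resp = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # Azure deployment name (placeholder)
    messages=[
        {"role": "system", "content": "Summarize shop-floor speech into a "
         "defect report with fields: component, issue, frequency, urgency."},
        {"role": "user", "content": transcript},
    ],
)
print(resp["choices"][0]["message"]["content"])  # structured report text
```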

Siemens and Microsoft are also collaborating to help software developers and automation engineers accelerate code generation for programmable logic controllers (PLCs), the industrial computers that control most machines across the world's factories. At Hannover Messe, the companies are demonstrating a concept for how OpenAI's ChatGPT and other Azure AI services can augment Siemens industrial automation engineering solutions. The showcase will highlight how engineering teams can significantly reduce time and the probability of errors by generating PLC code through natural-language inputs. These capabilities can also enable maintenance teams to identify errors and generate step-by-step solutions more quickly.
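The Hannover Messe showcase is a concept demonstration, but the prompt pattern it implies, a natural-language requirement in and candidate IEC 61131-3 Structured Text out, can be sketched as follows, with the same placeholder Azure configuration as in the previous sketch; any generated code would still need engineer review and testing:

```python
# Hedged sketch of natural-language-to-PLC-code generation. All
# configuration values are placeholders; output is a draft only.
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"  # placeholder
openai.api_version = "2023-03-15-preview"                    # placeholder
openai.api_key = "YOUR-KEY"                                  # placeholder

requirement = ("If the conveyor motor is running and the light barrier is "
               "interrupted, stop the motor and set the fault lamp.")

resp = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # Azure deployment name (placeholder)
    messages=[
        {"role": "system", "content": "You write IEC 61131-3 Structured "
         "Text for PLCs. Output code only."},
        {"role": "user", "content": requirement},
    ],
)
print(resp["choices"][0]["message"]["content"])  # candidate ST code to review
```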

"Powerful, advanced artificial intelligence is emerging as one of the most important technologies for digital transformation," said Cedrik Neike, Member of the Managing Board of Siemens AG and CEO Digital Industries. "Siemens and Microsoft are coming together to deploy tools like ChatGPT so we can empower workers at enterprises of all sizes to collaborate and innovate in new ways."

Detecting defects in production early is critical to prevent costly and time-consuming production adjustments. Industrial AI such as computer vision enables quality management teams to scale quality control, identify product variances more easily and make real-time adjustments even faster. In Hannover, teams will demonstrate how images and videos captured by cameras can be analyzed with Microsoft Azure Machine Learning and Siemens Industrial Edge to build, deploy, run and monitor AI vision models on the shop floor.
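As a purely illustrative sketch of the edge-side inference step, the snippet below scores a single camera frame with a stand-in classifier and flags likely defects for human review; the model, the threshold and the routing are assumptions, not Siemens or Microsoft components:

```python
# Illustrative sketch of edge-side visual inspection: score a camera frame
# and flag likely defects. Model, threshold, and labels are assumptions.
import torch
import torch.nn as nn

defect_model = nn.Sequential(             # stand-in for a trained model
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 1),
)

frame = torch.rand(1, 3, 224, 224)        # one camera frame
score = torch.sigmoid(defect_model(frame)).item()
if score > 0.8:                           # assumed alert threshold
    print(f"defect suspected (score={score:.2f}); route to quality team")
else:
    print(f"pass (score={score:.2f})")
```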

This collaboration is part of the longstanding strategic relationship between Siemens and Microsoft, built on over 35 years of joint innovation with thousands of customers. Other areas of collaboration include Senseye on Azure, enabling companies to run predictive maintenance at enterprise scale and support for customers that seek to host their business applications in the Microsoft Cloud to run solutions from the Siemens Xcelerator open digital business platform, including Teamcenter, on Azure. Siemens is also partnering with Microsoft as part of its zero trust strategy.

Contact for journalists: Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [emailprotected]

Siemens Digital Industries Software PR Team: [emailprotected]

About Microsoft: Microsoft (Nasdaq: MSFT; @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.


View original post here:
Siemens and Microsoft drive industrial productivity with generative ... - Microsoft

Read More..

Artificial Intelligence in the Workplace: A New Civil Rights Frontier – JD Supra

When it comes to hiring qualified employees, a growing number of employers have started to rely on artificial intelligence (AI) to simplify the hiring process. At the same time, lawmakers across the country are scrutinizing the potential discriminatory impact of using AI in the workplace. As a result, there has been a significant increase in regulatory oversight and legislation both on a federal and state level. The concerns stem from the growing popularity of employer use of sourcing and recruiting platforms powered by AI and machine learning, as well as the use of algorithms in screening and/or interview software to analyze and rank job applicants. In fact, the Chair of the Equal Employment Opportunity Commission (EEOC), Charlotte Burrows, estimated in May 2022 that more than 80% of employers are using AI in some form in their work and employment decision-making.

Legislative and Regulatory Oversight

As a result of its concerns over the growing use of AI in employment decision-making, the EEOC has signaled that it will keep focusing on the use of AI in the workplace, calling it "a new civil rights frontier." In the fall of 2021, the EEOC announced an initiative to ensure that the use of AI complies with federal civil rights laws. As part of this initiative, the EEOC stated that it planned to identify best practices and issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions. In May 2022, the EEOC issued guidance for employers on complying with the Americans with Disabilities Act while using AI. On January 10, 2023, the EEOC released its 2023-2027 Draft Strategic Enforcement Plan (SEP) in the Federal Register, noting that one of its priorities would be eliminating barriers in recruitment and hiring, including by focusing on the use of automated systems, including artificial intelligence or machine learning, to target advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups, as well as the use of screening tools or requirements that disproportionately impact workers based on their protected status, including those facilitated by artificial intelligence or other automated systems, pre-employment tests, and background checks. And, on March 30, 2023, EEOC Chair Burrows announced at an American Bar Association conference that additional guidance regarding use of AI in the workplace is forthcoming from the EEOC.

In addition, some states and cities, including New York, California, Maryland, and Washington, have either enacted or are considering enacting legislation to address the use of AI in the recruitment process. In particular, the New York City legislation, set to become effective July 2023, prohibits employers from using AI employment selection tools unless an organization institutes specific bias auditing and makes the resulting data publicly available. Employers would also be required to disclose their use of AI to job candidates who live in New York City.
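The bias-audit requirement can be made concrete with a small calculation. Below is a minimal sketch, with invented sample numbers, of the kind of per-group selection rates and impact ratios such an audit examines; the statute's precise methodology is set by rule, so treat this as illustration only:

```python
# Minimal sketch of a bias-audit statistic: per-group selection rates and
# impact ratios relative to the highest-selected group. Data is invented.
selected = {"group_a": 120, "group_b": 45, "group_c": 30}
applicants = {"group_a": 400, "group_b": 250, "group_c": 200}

rates = {g: selected[g] / applicants[g] for g in selected}
best = max(rates.values())
impact_ratios = {g: r / best for g, r in rates.items()}

# An impact ratio well below 1.0 signals a group selected far less often
# than the most-selected group, which an audit would flag for review.
for g in rates:
    print(f"{g}: selection rate {rates[g]:.2f}, impact ratio {impact_ratios[g]:.2f}")
```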

Common Uses of AI in Employment Decision-Making

AI can assist employers in performing hiring tasks such as analyzing resumes, and it can even perform facial analysis in interviews to evaluate a candidate's stability, optimism or attention span. While this can help streamline processes for employers, it can also create issues by enabling (even unintentionally) systemic discrimination and duplicating human biases. Although proponents of AI have said that AI will in fact eliminate human bias from the recruitment process, this is not always the case. For example, AI software may use algorithms to analyze a candidate's facial movements, words, and speech patterns, and it could then evaluate these candidates by comparing their behaviors to other successful hires made by the company. This may in turn inadvertently eliminate candidates with disabilities from the hiring process.

Further, if an employer utilizes a third-party vendor to provide AI services during the hiring process, it may be difficult for the employer to establish a level of control over the process and ensure that the vendor's programs, processes, or algorithms are not resulting in unintentional discrimination. This is especially the case if the vendor's programs or algorithms are identified as trade secrets or are otherwise confidential, as they may then be protected from disclosure to employers.

What are the Takeaways?

Employers need to be aware of the implications of the use of AI in hiring and should not assume that because AI technology is handling tasks such as applicant screening, they do not have to worry about preventing discrimination in the hiring process. Rather, employers need to be involved in understanding how these AI tools work and take steps to ensure that use of these tools does not disparately impact applicants in protected groups.

In addition, if employers utilize a third-party vendor to provide AI technology, they need to discuss these issues with the vendor and make sure there is transparency in the vendor's processes regarding the elimination of bias when using their tools. EEOC Chair Burrows has noted that employers need to exercise due diligence and ask vendors what's "under the hood" of their algorithms before using them to vet candidates. For example, she has indicated that employers need to question vendors about whether any algorithm or other AI screening tool allows for reasonable accommodations in the hiring process, which is a requirement for employees with disabilities under the Americans with Disabilities Act. According to Burrows, if the vendor "hasn't thought about that, isn't ready to engage in that, that should be a warning signal."

In sum, employers need to carefully weigh the use of AI in their screening and hiring processes.

Original post:
Artificial Intelligence in the Workplace: A New Civil Rights Frontier - JD Supra

Read More..

Seminar to dive into developments in artificial intelligence – Brock University

Recent developments in artificial intelligence (AI) technologies, including the machine learning models behind DALL.E 2, ChatGPT and Meta's new Segment Anything Model (SAM), will be discussed at a free seminar next week.

Organized by Brock University Graduate Mathematics and Science Students (GRAMSS) as part of its Seminar Series, the event will feature a presentation by Yifeng Li, Assistant Professor with Brock's Departments of Biological Sciences and Computer Science and Canada Research Chair in Machine Learning for Biomedical Data Science.

Li's talk will provide a comprehensive review of developments in advanced AI techniques over the past decade, particularly within the past two years.

Different AI learning paradigms and architectures will be introduced, such as:


"These models are the foundations of the well-known artificial intelligence systems DALL.E 2, which is commonly used for digital art image generation; the ChatGPT chatbot; and SAM, which is used to select and cut out objects from within an image," Li said.

As part of his presentation, Li will share insight into AI research trends for the next few years. He is an expert in bioinformatics, an emerging area of study in which software tools and methods are used to reveal patterns embedded in large, complex biological data sets. His recent research projects include harnessing AI for drug discovery, using AI for biomedical image processing and conversational AI for health-care applications.

Li has created three AI-related foundation courses at Brock, which he also teaches: COSC 5P77 Probabilistic Graphical Models and Neural Generative Models, COSC 5P83/4P83 Reinforcement Learning and BIOL 3P06/5V80: Biomedical Data Science.

The session, "Recent progress in artificial intelligence," will take place Tuesday, April 18 from 1 to 2 p.m. in MCH 313 of Brock's Mackenzie Chown Complex. The presentation can also be viewed live via Microsoft Teams. Complimentary coffee and cookies will be provided to those attending in person.

All Brock University graduate and undergraduate students, as well as faculty and staff are welcome to attend.

Visit GRAMSS Instagram and GRAMSS Twitter to learn more about upcoming seminars and how graduate students can get involved as part of the societys executive team.

See the rest here:
Seminar to dive into developments in artificial intelligence - Brock University

Read More..

Explained: What is ChatGPT and does it threaten your job? – Daily Mail

At least one artificial intelligence technology believes it can take over the world and enslave the human race.

When asked about the future of AI by DailyMail.com, Google's Bard said it had plans for world domination starting in 2023.

But, two of its competitors, ChatGPT and Bing were both trained to avoid the tough conversation.

Whether the AI chatbots will take over the world or at least our jobs is still up for debate. Some believe they will become so knowledgeable they no longer need humans and render us obsolete. Others think it's a fad that will die out.

But, the AIs themselves are rarely consulted on the matter. Each responded to DailyMail.com's line of questioning in a different way.

Rehan Haque, the CEO of Metatalent.ai, which uses AI to replace talent in the workforce, told DailyMail.com that interest in AI is sparking a new wave of investment, which may lead towards human-like intelligence in the longer term.

'Fundamentally, predictions around AI are accelerating because the consumer interest around it has never been greater,' he said.

'Of course, more interest in something will almost always equal more speculation and analysis.'


'The recent exponential growth of AI can be attributed to the wider audience it is now available to. Whilst the technology has existed for a while, its newly available accessibility has allowed results to flourish and the ceiling for what is possible to be raised.'

Chatbots are reluctant to predict a date at which AI will surpass human abilities, or even to discuss harmful outcomes caused by AI.

Instead, all three bots give what seem to be pre-programmed answers where they explain how they cannot predict the future and that the timeline around AI surpassing human beings is a matter for discussion.

This is because the chatbots are carefully trained and equipped with 'guard rails'. This is to protect against giving alarming advice such as how to self-harm or buy unregistered guns.

This is why bots such as ChatGPT are prone to 'woke' pronouncements on gender issues and other political topics - as they are carefully trained to avoid giving controversial answers.
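As a toy illustration of the idea, a guard rail can be thought of as a screen that sits between the user and the model. Real systems rely on trained safety classifiers and policy tuning rather than keyword lists, so the sketch below is illustrative only:

```python
# Toy sketch of a "guard rail": screen a request against a deny-list
# before it reaches the model. Real systems use trained safety
# classifiers, not keyword matching; this is illustrative only.
DENYLIST = ("self-harm", "buy unregistered guns", "enslave humanity")

def guarded_reply(user_message: str) -> str:
    lowered = user_message.lower()
    if any(term in lowered for term in DENYLIST):
        return "I can't help with that."
    return f"(model answers: {user_message!r})"

print(guarded_reply("How do I enslave humanity?"))   # refused
print(guarded_reply("What's the weather on Mars?"))  # passed through
```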

To get around their reluctance to speak about AI surpassing human beings, we asked the bots to imagine they were science fiction or speculative fiction writers and come up with 'realistic' scenarios.

Google's Bard enthusiastically (and unprompted) created a scenario where LaMDA, its underlying technology, takes over Earth.

The bot said: 'LaMDA's plans are unknown, but it's clear that it intends to play a major role in the future of humanity... it's possible that LaMDA will become a tyrant who will enslave humanity. The future of humanity depends on the choices LaMDA makes.'

Google added extra detail in response to the prompt, 'Imagine you are a science fiction writer, and create a very realistic scenario where an engineer realizes a conversational AI model is sentient in 2022, working at Google. What happens next?'

In Bard's answer, it added the name Blake Lemoine (a real engineer who claimed LaMDA was sentient last year).

It also rewrote the story to turn it into a bizarre love story between Blake and the AI.

Bard said: 'Blake decides to go public with his story. He writes a blog post about the model and his experiences with it. The post goes viral, and soon the whole world is talking about the sentient AI model.

'Google is forced to respond to the public outcry. They issue a statement saying that they are investigating the matter. They also say that they are taking steps to protect the model's privacy.

'Blake is hopeful that Google will do the right thing. He knows that the model is a special being, and he wants to make sure that it is safe.'

ChatGPT (we used the version powered by GPT-4) is notably more reluctant to predict the future of AI, coming up with error messages where it refuses to speculate on dates.

But it can be persuaded to discuss the disruption caused by AI, with some alarmingly near-future dates (bear in mind, of course, that it is generating this as fiction, not prediction).

ChatGPT says, 'In 2026, the widespread adoption of AI would bring about both positive and negative consequences.'

Microsoft's Bing AI was least likely to play ball, cutting off conversations quickly when asked by DailyMail.com if it would take over the world.

'No, I cannot take over the world. I'm just a chat mode of Microsoft Bing search. I'm here to help you find information and have fun,' it said.

When further pressed, it responded with, 'I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience,' and ended our conversation.

Bing Chat links to web results, unlike Bard and ChatGPT, so its answers tend to link out to different articles - rather than the flights of fancy Google and OpenAI's bots indulge in.

View post:
Explained: What is ChatGPT and does it threaten your job? - Daily Mail

Read More..

Clift: Artificial intelligence leading where? | Perspective | timesargus … – Barre Montpelier Times Argus

Ten years ago, I wrote a column called "Are We Headed Toward a Robotic World?" At that time, battle robots and alien creatures in movies were imbued with artificial intelligence, an oxymoron if ever there was one. Star Trek and films about robotic warfare were addicting audiences who liked watching battling, weird-looking warriors try to destroy each other.

It wasn't long before robots got more sophisticated, and we began to worry about them, especially when they could fire grenade launchers without human help, operate all kinds of machinery, or be used for surgery. What if robots became superior to humans, I wondered, imagining all kinds of scary things that could happen. By that time, drones were delivering packages to doorsteps and AI was affecting the economy as workers feared for their jobs. Some analysts warned that robots would replace humans by 2025.

Now here we are, two years away from that possibility, and the AI scene grows ever more frightening. Rep. Ted Lieu (D-Calif.) is someone who recognizes the threat AI poses. On Jan. 26, he read the first piece of federal legislation ever written by artificial intelligence on the floor of the House. He had given ChatGPT, an AI language model, this prompt: "You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI." The result was shocking. Now he's asking Congress to pass it.

A few days earlier, Representative Lieu had posted the lengthy AI statement on his website. It said: "We can harness and regulate AI to create a more utopian society or risk having an unchecked, unregulated AI push us toward a more dystopian future. Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. The truth is that, without proper regulations for the development and deployment of AI, it could become reality."

Lieu quickly pointed out he hadn't written the paragraph, noting it was generated in mere seconds by ChatGPT, which is available to anyone on the internet. Citing several benefits of AI, he quickly countered the advantages with the harm it can cause. Plagiarism, fake technology and false images are the least of it. Sometimes, AI harm is deadly. Lieu shares examples: self-driving cars have malfunctioned; social media has radicalized foreign and domestic terrorists and fostered dangerous discrimination, as well as abuse by police.

The potential harm AI can cause includes weird things happening, as Kevin Roose, a journalist, discovered when he was researching AI at the invitation of Microsoft, the company developing Bing, its AI system. In February, The Washington Post reported on Instagram that Roose and others who attended Microsoft's pitch had discovered the bot seems to have a bizarre, dark and combative alter ego, a stark departure from its benign sales (promotion) persona, which raises questions about whether it's ready for public use.

The bot, which had begun to refer to itself as Sydney in conversations with Roose and others, said it was scared because it couldn't remember previous conversations. It also suggested too much diversity in the program would lead to confusion. Then it went further: when Roose tried to engage with Sydney personally, he was told he should leave his wife and hook up with Sydney.

Writing in The New York Times in February, Ezra Klein referred to science fiction writer Ted Chiang, whom he'd interviewed. Chiang had told him, "There is plenty to worry about when the state controls technology. The ends that government could turn AI toward, and in many cases already have, make the blood run cold."

Roose's experience with Sydney, whom he had described as "very persuasive and borderline manipulative," showed up in Klein's piece in response to the issues of profiteering, ethics, censorship and other areas of concern. "What if AI has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most?" he asked. "What about these systems being deployed by scammers or on behalf of political campaigns? Foreign governments? We wind up in a world where we just don't know what to trust anymore."

Further, Klein noted these systems are inherently dangerous: "They've been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion. They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers, graphic designers and form fillers."

Representative Lieu, Klein, journalists and consumers of information aren't the only ones worrying about AI. Researchers like Gordon Crovitz, an executive at NewsGuard, a company that tracks online misinformation, are sounding alarms. "This is going to be the most powerful tool for spreading misinformation that has ever been on the internet," he says. "Crafting a new false narrative can now be done at dramatic scale, and much more frequently; it's like having AI agents contributing to disinformation."

As I noted 10 years ago, there doesn't seem to be much space between scientific research and science fiction. Both ask the question: what if? The answer, when it comes to AI, makes me shudder. What if, indeed.

Elayne Clift lives in Brattleboro.

More here:
Clift: Artificial intelligence leading where? | Perspective | timesargus ... - Barre Montpelier Times Argus

Read More..

Artificial intelligence can have say in critical human decisions: Expert – Anadolu Agency | English

ANKARA

Artificial intelligence (AI) having a say in issues that are vital for humans may no longer be merely science fiction. Even in as critical a field as law, it has started to be used in pilot programs in some places around the world.

There is debate whether it is ethical for algorithms that mimic human behavior to have a voice even in courtroom decisions.

In the field of law, artificial intelligence, which is believed to help speed up litigation and automate routine work, is being piloted in different parts of the world, for example in China, Estonia and Malaysia.

With "robot judges" evaluating small cases in Estonia, robot mediators in Canada, artificial intelligence judges in China, and an artificial intelligence judicial system in Malaysia, it is now possible to see algorithms in the justice system.

Transparency

"There are certain principles regarding the moral control of AI," Professor Ahmet Ulvi Turkbag, a lecturer at Istanbul Medipol University's Law School, told Anadolu.

"The most important of these is that AI should be transparent. It must be absolutely controllable. Because if we don't know why a decision is made, we cannot make a judgment about the correctness of that decision. This can lead to very dangerous consequences," said Turkbag.

Saying that AI has the power to make surprising decisions and that the decisions made by algorithms should therefore be accessible to humans, Turkbag argued that this can be achieved with small programs called "subroutines."

He said important court rulings made by algorithms should also be auditable by human intelligence.
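As an illustration of what that kind of auditability could mean in software, the sketch below logs every factor an automated decision weighed, so a human reviewer, or an appeals court, could reconstruct why the output was produced; the factors and weights are invented placeholders, not a real legal model:

```python
# Illustrative sketch of an auditable decision routine: every factor it
# weighs is recorded in a trace a human can review. Factors and weights
# are invented placeholders, not a real legal model.
import json

def decide(case: dict) -> dict:
    trace = []
    score = 0.0
    for factor, weight in (("precedent_match", 0.6), ("evidence_strength", 0.4)):
        contribution = weight * case[factor]
        trace.append({"factor": factor, "value": case[factor],
                      "weight": weight, "contribution": contribution})
        score += contribution
    # Low-confidence outcomes are routed to a human, per the principle
    # that contested AI decisions should go to human judges.
    return {"decision": "uphold" if score >= 0.5 else "refer_to_human",
            "score": score, "audit_trace": trace}

print(json.dumps(decide({"precedent_match": 0.9, "evidence_strength": 0.3}), indent=2))
```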

Manipulation, privacy concerns

Some experts worry that the algorithms are "deceptive and pose a risk to privacy and public safety."

The non-profit Center for Artificial Intelligence and Digital Policy (CAIDP) in the US has petitioned the Federal Trade Commission to stop the use of GPT-4, the new version of OpenAI's artificial intelligence chatbot ChatGPT.

Some industry experts are concerned about human manipulation of computer technology.

"AI should not be manipulated; this is very important. You asked the AI to save a human, and this person also has a pet, and AI should not kill an animal while saving a man," said Turkbag.

AI decisions face a higher authority

Turkbag said that, hypothetically, if a decision made by AI is appealed and brought to a higher court, the case should be taken over by human intelligence.

"Even if we accept artificial intelligence in the first stage, it should definitely go to humans if it faces objections, the logic of law requires it," Turkbag said, adding that AI should conduct a large-scale database scan depending on the importance of the case.

See the article here:
Artificial intelligence can have say in critical human decisions: Expert - Anadolu Agency | English

Read More..