
Why Ethereum Classic Is Trading Lower – Benzinga

June 5, 2023 2:57 PM | 1 min read

Ethereum Classic (CRYPTO: ETC) is trading lower by 7.30% to $16.67 Monday afternoon amid broader weakness across cryptocurrencies. Several cryptocurrencies are trading lower after the SEC sued Binance over alleged U.S. securities violations.

As reported by our Benzinga team, Binance Holdings Ltd. and its CEO Changpeng Zhao are facing a lawsuit filed by the Securities and Exchange Commission (SEC), alleging violations of U.S. securities regulations.



The legal proceedings allege that Zhao, as the driving force behind Binance, directed employees to use VPNs to conceal their geographical locations.

Binance is the largest cryptocurrency exchange in the world by trading volume. In fact, it tops the ranks of cryptocurrency exchanges maintained by CoinMarketCap, which are ranked based on 24-hour trading volumes, exchange score and average liquidity. Binance ranks ahead of exchanges like Huobi Global and Coinbase, which follow closely in 2nd and 3rd places, respectively.

© 2023 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

See the original post:

Why Ethereum Classic Is Trading Lower - Benzinga

Read More..

Tron's TRX now live on Ethereum via BitTorrent bridge, boosting DeFi access – CryptoSlate

What is CryptoSlate Alpha?

A web3 membership designed to empower you with cutting-edge insights and knowledge.



Access Protocol is a web3 monetization paywall. When users stake ACS, they can access paywalled content.
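To make the mechanism concrete, here is a minimal sketch of the stake-gated access pattern described above. The threshold, balance lookup and names are hypothetical placeholders, not Access Protocol's actual API; real integrations query its on-chain program and the publisher's backend.

```python
# Hypothetical sketch of stake-gated content access (not Access Protocol's real API).
ACS_REQUIRED = 1_000  # assumed minimum stake set by the publisher

# Simulated on-chain state: how much ACS each wallet has locked with a publisher pool.
_STAKES = {("wallet-abc", "cryptoslate-pool"): 1_500}

def staked_balance(wallet: str, pool: str) -> int:
    """Placeholder for an on-chain query of the ACS locked by `wallet` in `pool`."""
    return _STAKES.get((wallet, pool), 0)

def can_read(wallet: str, pool: str) -> bool:
    # Access is granted while the locked stake meets the publisher's minimum.
    return staked_balance(wallet, pool) >= ACS_REQUIRED

print(can_read("wallet-abc", "cryptoslate-pool"))  # True
print(can_read("wallet-xyz", "cryptoslate-pool"))  # False
```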


Read more:

Tron's TRX now live on Ethereum via BitTorrent bridge, boosting DeFi access - CryptoSlate

Read More..

New Dartmouth Center Applies AI to Improve Health Outcomes – Dartmouth News

Dartmouth has created a Center for Precision Health and Artificial Intelligence to spur interdisciplinary research that can better leverage, as well as more safely and ethically deploy, biomedical data in assessing and treating patients and improving their health care outcomes.

The center is being launched with initial funding of $2 million from the Geisel School of Medicine and Dartmouth Cancer Center and is based in the Williamson Translational Research Building, a Dartmouth-owned building on the Dartmouth Hitchcock Medical Center campus in Lebanon, N.H.

Artificial intelligence is poised to play a transformative role in health care by delivering rapid and innovative solutions to real-world clinical challenges, improving patient outcomes, and creating better and more equitable access for all, says President Philip J. Hanlon '77.

This new center will help foster innovation and collaboration in these critically important fields.

Precision health is a holistic approach that aims to personalize health care by tailoring treatments and disease prevention strategies to a person's unique biology: their genes, medical history, lifestyle, and environment.

A wealth of biomedical data can be gathered through genomic sequencing, molecular testing, imaging techniques, and wearable monitoring devices, all of which have become more advanced, affordable, and broadly available over the past decade.


AI holds the key to extracting valuable insight from this deluge of data because it can sift through and analyze complex and heterogeneous information to identify trends and patterns and extract digital biomarkers that can guide clinically actionable decisions.

Machine learning models trained on a host of different data sets can predict disease risk, enhance the accuracy of diagnoses, anticipate the course of an illness, and tailor treatment options best suited to the patient.
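As a rough illustration of the kind of workflow described above, the sketch below trains a generic classifier on synthetic stand-ins for biomedical features and produces a per-patient risk score. It is purely illustrative, not CPHAI code; the features, labels and model choice are assumptions.

```python
# Minimal illustrative sketch of disease-risk prediction on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-ins for heterogeneous biomedical features: genomic markers,
# lab values, imaging-derived measurements, wearable summaries, etc.
X = rng.normal(size=(1000, 20))
# Synthetic "disease" label loosely driven by a few of those features.
y = (X[:, 0] + 0.5 * X[:, 3] - X[:, 7] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]  # per-patient predicted risk
print("AUC:", round(roc_auc_score(y_test, risk), 3))
```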

CPHAI will be governed by the dean of Geisel and advised by a committee that will have representatives and stakeholders from Geisel, Dartmouth Cancer Center, Thayer School of Engineering, Arts and Sciences, and Dartmouth Health.

It is truly a Dartmouth center with leaders and advisers from across the institution connecting clinicians and AI scientists, says Geisel Dean Duane Compton.

By harnessing the power of AI and machine learning, CPHAI aims to create a toolbox of digital technologies that will empower providers to identify and deliver the most effective health care strategy for each patient.

Researchers will work on projects such as developing AI-driven diagnostic tools, optimizing treatment strategies, and analyzing biomedical data to inform public health policies.

AI models created through collaborations with radiologists and pathologists will be able to draw precise and complex inferences directly from medical images that complement the knowledge and experience of human imaging professionals and make diagnoses more reliable and efficient, reducing potential diagnostic errors.

The center will also enable researchers to evaluate new digital tools they develop in clinical settings, paving the way for creating and building applications that can be integrated into health care systems after seeking FDA approval.

What makes CPHAI unique is its interdisciplinary and comprehensive approach to precision health and artificial intelligence, focusing not only on technological advancements but also on ethical and societal implications, says Saeed Hassanpour, associate professor of biomedical data science, epidemiology, and computer science, who serves as the center's inaugural director.

Saeed Hassanpour stands outside the Williamson Translational Research Building, where the new center is based, on the campus of Dartmouth Hitchcock Medical Center in Lebanon, N.H. (Photo by Katie Lenhart)

The center, which will collaborate with the Dartmouth Ethics Institute, Neukom Institute for Computational Science, and the Wright Center for the Study of Computation and Just Communities, is committed to ensuring the ethical use of AI and fostering diversity and inclusion in the field, says Hassanpour. This commitment will help identify the limitations of AI, address issues related to biases in AI algorithms and datasets, improve transparency and privacy, and ensure equitable outcomes for all individuals, regardless of their background.

CPHAI will also create new educational and training opportunities, attracting students and professionals interested in pursuing careers in AI and precision health, says Hassanpour. These opportunities will help develop a skilled workforce in the Upper Valley region, making it an attractive destination for technology and health care companies.

Medical residents, postdocs, and students, both graduate and undergraduate, interested in working with artificial intelligence will also find unique opportunities for learning and research, Compton says.

We want every individual to reach their optimal health, which means both prevention and care medicine must come together. Precision health is a broader application than precision medicine, says Compton.

The new Dartmouth center has been in the works for several years and comes as the market for AI in health care is expected to grow tremendously, from just under $5 billion in 2020 to more than $45 billion in 2026.
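Taking the article's round figures at face value (about $5 billion in 2020 growing to $45 billion by 2026, a six-year span), the implied compound annual growth rate works out as follows; this is a back-of-the-envelope check, not a figure from the source:

$$\text{CAGR} = \left(\tfrac{45}{5}\right)^{1/6} - 1 \approx 0.44, \quad \text{i.e. roughly } 44\%\ \text{per year.}$$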

Many of us at Dartmouth have been working in AI in the last several years. We have the talent, skills, experience, and material to develop and implement innovative AI-driven diagnostic tools, says Arief Suriawinata, chair of pathology and laboratory medicine and a member of the center's advisory committee. The formation of CPHAI will foster intercampus and interdisciplinary collaborations, attract and retain top talent, and secure additional funding for our concerted efforts.

The technologies developed at CPHAI will help pathologists in triaging and screening cases, improve the diagnostic standard and quality, and optimize workflow, he says.

Another member of the advisory committee, Jocelyn Chertoff, chair of radiology, also sees great promise for patients from the center's work.

From management of administrative clinical tasks to computer-aided detection of cancers, radiologists are already using AI, Chertoff says. Tools based on deep learning algorithms promise to transform the practice by helping radiologists better interpret images, make the process of producing images from scanners more accurate and efficient, and improve a hospital's overall workflow so that patients get the most timely care.

Also on the advisory committee are Steven Leach, director of the Dartmouth Cancer Center; Elizabeth F. Smith, dean of the Faculty of Arts and Sciences; Steven Bernstein, Dartmouth Health chief research officer; Michael Whitfield, chair of biomedical data science; and Charles Thomas Jr. '79, chief of radiation oncology.

Hassanpour expects that the center will actively engage with local and global communities to ensure their perspectives, concerns, and needs are considered in the development and application of AI technologies. This engagement will serve to build trust and awareness about the benefits and potential risks of AI in health care.

Overall, CPHAI's presence in our region could lead to significant advancements in health care, education, and economic development, positioning the area as a leader in AI and precision health research, he says.


An FAQ on the CPHAI is also available.

More here:
New Dartmouth Center Applies AI to Improve Health Outcomes - Dartmouth News

Read More..

Not just a fad: Firm launches fund designed to capitalize on A.I. boom – CNBC

A major ETF provider is betting the artificial intelligence boom is just starting.

Roundhill Investments launched the Generative AI & Technology ETF (CHAT) less than 20 days ago. It's the first-ever exchange-traded fund designed to track companies involved in generative AI and other related technologies.

"These companies, we believe, are not just a fad. They're powering something that could be as ubiquitous as the internet itself," the firm's chief strategy officer, Dave Mazza, told "ETF Edge" this week. "We're not talking about hopes and dreams [or] some theme or fad that could happen 30 years in the future which may change the world."

Mazza notes the fund includes not just pure play AI companies like C3.ai but also large-cap tech companies such as Microsoft and AI chipmaker Nvidia.

Nvidia is the fund's top holding at 8%, according to the company website. Its shares are up almost 42% over the past two months. Since the beginning of the year, Nvidia stock has soared 169%.
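As a rough way to see how a single large holding feeds into fund performance, a standard approximation multiplies the position's weight by its return over the period. The figures below reuse the article's numbers and assume a fixed 8% weight, ignoring rebalancing and the fact that CHAT has only traded since May 18:

$$\text{contribution} \approx w \times r = 0.08 \times 0.42 \approx 0.034, \quad \text{i.e. about } 3.4 \text{ percentage points.}$$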

"This [AI] is an area that's going to get a lot of attention," said Mazza.

His bullish forecast comes amid concerns AI is a price bubble that will pop and take down the Big Tech rally.

In a recent interview on CNBC's "Fast Money," Richard Bernstein Advisors' Dan Suzuki, a Big Tech bear since June 2021, compared the AI rally to the dot-com bubble in the late 1990s.

"People jump from narrative to narrative," the firm's deputy chief investment officer said on Wednesday. "I love the technology. I think the applications will be huge. That doesn't mean it's a good investment."

The CHAT ETF is up more than 8% since it started trading on May 18.

The rest is here:
Not just a fad: Firm launches fund designed to capitalize on A.I. boom - CNBC

Read More..

New superconducting diode could improve performance of quantum computers and artificial intelligence – Phys.org


A University of Minnesota Twin Cities-led team has developed a new superconducting diode, a key component in electronic devices, that could help scale up quantum computers for industry use and improve the performance of artificial intelligence systems. Compared to other superconducting diodes, the researchers' device is more energy efficient; can process multiple electrical signals at a time; and contains a series of gates to control the flow of energy, a feature that has never before been integrated into a superconducting diode.

The paper is published in Nature Communications.

A diode allows current to flow one way but not the other in an electrical circuit. It's essentially half of a transistor, the main element in computer chips. Diodes are typically made with semiconductors, but researchers are interested in making them with superconductors, which have the ability to transfer energy without losing any power along the way.

"We want to make computers more powerful, but there are some hard limits we are going to hit soon with our current materials and fabrication methods," said Vlad Pribiag, senior author of the paper and an associate professor in the University of Minnesota School of Physics and Astronomy. "We need new ways to develop computers, and one of the biggest challenges for increasing computing power right now is that they dissipate so much energy. So, we're thinking of ways that superconducting technologies might help with that."

The University of Minnesota researchers created the device using three Josephson junctions, which are made by sandwiching pieces of non-superconducting material between superconductors. In this case, the researchers connected the superconductors with layers of semiconductors. The device's unique design allows the researchers to use voltage to control the behavior of the device.

Their device also has the ability to process multiple signal inputs, whereas typical diodes can only handle one input and one output. This feature could have applications in neuromorphic computing, a method of engineering electrical circuits to mimic the way neurons function in the brain to enhance the performance of artificial intelligence systems.

"The device we've made has close to the highest energy efficiency that has ever been shown, and for the first time, we've shown that you can add gates and apply electric fields to tune this effect," explained Mohit Gupta, first author of the paper and a Ph.D. student in the University of Minnesota School of Physics and Astronomy. "Other researchers have made superconducting devices before, but the materials they've used have been very difficult to fabricate. Our design uses materials that are more industry-friendly and deliver new functionalities."

The method the researchers used can, in principle, be used with any type of superconductor, making it more versatile and easier to use than other techniques in the field. Because of these qualities, their device is more compatible for industry applications and could help scale up the development of quantum computers for wider use.

"Right now, all the quantum computing machines out there are very basic relative to the needs of real-world applications," Pribiag said. "Scaling up is necessary in order to have a computer that's powerful enough to tackle useful, complex problems. A lot of people are researching algorithms and usage cases for computers or AI machines that could potentially outperform classical computers. Here, we're developing the hardware that could enable quantum computers to implement these algorithms. This shows the power of universities seeding these ideas that eventually make their way to industry and are integrated into practical machines."

In addition to Pribiag and Gupta, the research team included University of Minnesota School of Physics and Astronomy graduate student Gino Graziano and University of California, Santa Barbara researchers Mihir Pendharkar, Jason Dong, Connor Dempsey, and Chris Palmstrøm.

More information: Mohit Gupta et al, Gate-tunable superconducting diode effect in a three-terminal Josephson device, Nature Communications (2023). DOI: 10.1038/s41467-023-38856-0

Journal information: Nature Communications

Read the original post:
New superconducting diode could improve performance of quantum computers and artificial intelligence - Phys.org

Read More..

‘Artificial intelligence is the defining technology of our time’ – SWI swissinfo.ch in English

Catrin Hinkel, CEO of Microsoft Switzerland, is convinced that artificial intelligence (AI) is the next big step in how we interact with IT. However, this new technology will be a kind of co-pilot and will not replace human intelligence, she tells SWI swissinfo.ch at Microsoft's headquarters in Zurich.

This content was published on June 6, 2023.

SWI swissinfo.ch: You arrived in Zurich from your native Germany in 2021 to head Microsoft's Swiss subsidiary. What surprised you most?

Catrin Hinkel: I was very impressed by the level of innovation and creativity in Switzerland and at Microsoft. The Swiss people have a long history of innovation and the team at Microsoft Switzerland is passionate about creating new and innovative solutions. I was also amazed by the strong cooperation between Microsoft and its partners in Switzerland.

Catrin Hinkel was born in Germany in 1969. After completing bilingual business studies at the University of Reutlingen in 1992, she worked for the global consulting firm Accenture. There she held a number of leadership roles, including that of Senior Managing Director for Cloud First Strategy and Consulting in Europe. She has been the CEO of Microsoft Switzerland since May 2021.

SWI: Microsoft employs over 1,000 people in Switzerland. What are the subsidiary's main tasks?

C.H.: As the CEO of Microsoft Switzerland, I'm responsible for the 600-strong Swiss team, which is in charge of marketing and sales in Switzerland. We work closely with our customers to support them in their digital journeys. In addition, Microsoft employs a further 400 people in Switzerland who are part of the international team.

SWI: What is the role of Microsofts international team in Switzerland?

C.H.: The members of this team are attached to the various technology units in the Microsoft group and contribute to the development of new products at the international level. Both the Swiss team and Swiss customers benefit from the expertise of this international team, particularly in the fields of mixed and augmented reality.

SWI: Some companies such as Google, Amazon, Twitter and Microsoft have recently cut their workforce worldwide. What about Microsoft in Switzerland?

C.H.: We're not able to provide detailed figures. However, as a company operating in highly competitive and dynamic technology markets, we're obliged to adapt flexibly in order to meet our customers' requirements. This is the norm in our market. We're therefore taking on new recruits in areas where we're expanding and where we see a future; meanwhile, in sectors where our growth is weaker, we're positioning ourselves accordingly so as to remain agile.

SWI: To what extent are you affected by the shortage of IT specialists in Switzerland?

C.H.: The shortage of specialists is a serious problem, both in Switzerland and abroad, not just for Microsoft but also for our customers and our partners. To help solve the problem, we launched the Skills for Switzerland initiative in 2020. This has enabled us to boost the digital skills of more than 630,000 people in Switzerland. Organisations such as the human resource company Adecco, based in Zurich, and the CyberPeace Institute, in Geneva, are also taking part in this scheme. What's more, we are working on other projects with the association digitalswitzerland and retailer Migros.

SWI: The cloud market is booming and, according to the International Data Corporation (IDC), should exceed $11 billion (CHF10 billion) in Switzerland by 2026. How do you explain this growth?

C.H.: Thanks to the cloud services of a company like Microsoft, our customers can outsource their data processing and benefit from huge economies of scale and skills. In concrete terms, thanks to the cloud, a very large number of Swiss companies of all sizes have access to new technologies such as artificial intelligence at competitive costs. This means that companies can innovate as they wish; so, ultimately, it can be said that the cloud fuels innovation.

SWI: The fact that your client companies' data is sometimes stored abroad is a source of concern.

C.H.: Microsoft is a global company that serves both a local and international clientele, so we strive to provide our customers with the most appropriate solutions. In Switzerland, thanks to the presence of our four data centres, we can offer sound local solutions. This local offering has also enabled us to win the trust of Swiss companies that are subject to stringent local requirements. I'm thinking, for example, of banks of all sizes, which are highly regulated and supervised by the Swiss Financial Market Supervisory Authority (FINMA).

SWI: Nevertheless, some Swiss members of parliament are concerned that you may have access to sensitive customer data. How do you assuage their fears?

C.H.: With cloud services, we provide our customers with technological platforms. We're not at all interested in the data on these platforms. It's totally out of the question for us to use this data or pass it on to other companies. What's more, on our platforms, our customers' data is protected by encryption. What interests us, ultimately, is the democratisation of new technologies.

SWI: What's your view on technological developments such as blockchain, the metaverse and AI?

C.H.: When used properly, technology can make people's lives simpler, more efficient and more enjoyable, especially when it comes to carrying out routine tasks. Nevertheless, technology will always remain an aid, a kind of co-pilot, and will never replace real men and women.

As for AI, it's the defining technology of our time. It's also the next big step forward in the way we interact with IT. In a world that is increasingly complex economically, AI has the power to revolutionise many types of jobs.

SWI: What are your main AI applications?

C.H.: Our investment in AI spans our entire business, from Teams and Outlook to Bing and Xbox. We're already seeing considerable interest from our customers in Switzerland and are actively working on value cases. For example, our Copilot application can be used to quickly extract basic data from a 300-page annual report.

SWI: AI raises numerous ethical issues. Several countries are enacting laws to regulate its use.

C.H.: This is precisely why in 2018 Microsoft defined a series of ethical principles applicable to all our uses of AI. For instance, we exclude all bias based on race. We also rule out any applications that are not yet completely reliable and which, in case of malfunction, could harm individuals; I'm thinking of facial recognition, for example.

Edited by Samuel Jaberg. Translated from French by Julia Bassam.

In compliance with the JTI standards

More: SWI swissinfo.ch certified by the Journalism Trust Initiative

View original post here:
'Artificial intelligence is the defining technology of our time' - SWI swissinfo.ch in English

Read More..

Artificial intelligence helps doctors with new ways of detection … – WKRC TV Cincinnati

CINCINNATI (WKRC) - The next generation of medicine is now in use in the Tri-State. Artificial intelligence is increasing the odds that doctors won't miss what could be critical to a patient's survival. It's improving doctor-patient care.

Artificial intelligence, or AI, has been used for years in medicine, but lately it has helped in detection and discovery in new ways that can save patients lives.

It was just shocking because I have no risk factors other than being female, said Jenny Dermody, who is a breast cancer survivor.

She's cancer-free now. But when Dermody recently went for her annual mammogram, she admitted she got quite a scare.

I did my mammogram because it was something that I always do for my wellness checkup, but it never crossed my mind that I was going to be a cancer patient, Dermody said.

Part of what helped make her diagnosis is the next generation of medicine.

There's been studies. There's a range, but for the average radiologist, the sensitivity for this has increased somewhere between five and six percent, said Dr. Anthony Antonoplos, a TriHealth radiologist. So, the system that we use is something called ProFound AI. It's an AI algorithm that runs concurrently in the background and assists the radiologist in interpreting the 3D, or the tomosynthesis portion, of the screening mammogram.

The AI analyzes each image as it's brought up. Markings are automatically generated where something appears out of the ordinary, therefore alerting the radiologist to pay special attention to those areas on that exam, Dr. Antonoplos said.
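The sketch below shows the general shape of a concurrent, second-reader style workflow like the one described: a detection model scores regions on each tomosynthesis slice, and only those above an alert threshold are surfaced as markings for the radiologist. It is an illustrative assumption, not ProFound AI's actual (proprietary) interface; the detector API, threshold and names are hypothetical.

```python
# Generic illustration of concurrent AI-assisted reading; NOT ProFound AI's real API.
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class Finding:
    slice_index: int                  # which tomosynthesis slice the mark sits on
    bbox: Tuple[int, int, int, int]   # (x, y, width, height) of the flagged region
    score: float                      # model's suspicion score in [0, 1]

def analyze_study(slices: List[np.ndarray], detector, threshold: float = 0.5) -> List[Finding]:
    """Score every slice with a detection model and keep only regions
    whose suspicion score clears the alert threshold."""
    findings = []
    for i, image in enumerate(slices):
        for bbox, score in detector(image):   # hypothetical detector: yields (bbox, score) pairs
            if score >= threshold:
                findings.append(Finding(i, bbox, score))
    # A viewer would overlay these as markings on the exam, drawing the
    # radiologist's attention to those areas; the final read stays with the human.
    return findings
```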

He said the AI program increases the share of call-backs that are accurate by reducing the ones that turn out not to be necessary.

When we can decrease those call backs that generate anxiety, extra expense, appointments. People have very busy lives. When we can decrease that as well, that's a win-win, Dr. Antonoplos said.

He also said the next version of this AI will take it one step further. It will allow radiologists to compare previous mammograms with an algorithm that would help detect changes.

Go here to read the rest:
Artificial intelligence helps doctors with new ways of detection ... - WKRC TV Cincinnati

Read More..

AI poses national security threat, warns terror watchdog – The Guardian

Artificial intelligence (AI)

Security services fear the new technology could be used to groom vulnerable people

The creators of artificial intelligence need to abandon their tech utopian mindset, according to the terror watchdog, amid fears that the new technology could be used to groom vulnerable individuals.

Jonathan Hall KC, whose role is to review the adequacy of terrorism legislation, said the national security threat from AI was becoming ever more apparent and the technology needed to be designed with the intentions of terrorists firmly in mind.

He said too much AI development focused on the potential positives of the technology while neglecting to consider how terrorists might use it to carry out attacks.

They need to have some horrible little 15-year-old neo-Nazi in the room with them, working out what they might do. You've got to hardwire the defences against what you know people will do with it, said Hall.

The government's independent reviewer of terrorism legislation admitted he was increasingly concerned by the scope for artificial intelligence chatbots to persuade vulnerable or neurodivergent individuals to launch terrorist attacks.

What worries me is the suggestibility of humans when immersed in this world and the computer is off the hook. Use of language, in the context of national security, matters because ultimately language persuades people to do things.

The security services are understood to be particularly concerned with the ability of AI chatbots to groom children, who are already a growing part of MI5's terror caseload.

As calls grow for regulation of the technology following warnings last week from AI pioneers that it could threaten the survival of the human race, it is expected that the prime minister, Rishi Sunak, will raise the issue when he travels to the US on Wednesday to meet President Biden and senior congressional figures.

Back in the UK, efforts are intensifying to confront national security challenges posed by AI with a partnership between MI5 and the Alan Turing Institute, the national body for data science and artificial intelligence, leading the way.

Alexander Blanchard, a digital ethics research fellow in the institute's defence and security programme, said its work with the security services indicated the UK was treating the security challenges presented by AI extremely seriously.

There's a lot of willingness among defence and security policymakers to understand what's going on, how actors could be using AI, what the threats are.

There really is a sense of a need to keep abreast of what's going on. There's work on understanding what the risks are, what the long-term risks are [and] what the risks are for next-generation technology.

Last week, Sunak said that Britain wanted to become a global centre for AI and its regulation, insisting it could deliver massive benefits to the economy and society. Both Blanchard and Hall say the central issue is how humans retain cognitive autonomy, or control, over AI and how this control is built into the technology.

The potential for vulnerable individuals alone in their bedrooms to be quickly groomed by AI is increasingly evident, says Hall.

On Friday, Matthew King, 19, was jailed for life for plotting a terror attack, with experts noting the speed at which he had been radicalised after watching extremist material online.

Hall said tech companies need to learn from the errors of past complacency: social media has been a key platform for exchanging terrorist content in the past.

Greater transparency from the firms behind AI technology was also needed, Hall added, primarily around how many staff and moderators they employed.

We need absolute clarity about how many people are working on these things and their moderation, he said. How many are actually involved when they say they've got guardrails in place? Who is checking the guardrails? If you've got a two-man company, how much time are they devoting to public safety? Probably little or nothing.

New laws to tackle the terrorism threat from AI might also be required, said Hall, to curb the growing danger of lethal autonomous weapons: devices that use AI to select their targets.

Hall said: You're talking about [This is] a type of terrorist who wants deniability, who wants to be able to fly and forget. They can literally throw a drone into the air and drive away. No one knows what its artificial intelligence is going to decide. It might just dive-bomb a crowd, for example. Do our criminal laws capture that sort of behaviour? Generally terrorism is about intent; intent by human rather than intent by machine.

Lethal autonomous weaponry, or loitering munitions, have already been seen on the battlefields of Ukraine, raising moral questions over the implications of the airborne autonomous killing machine.

AI can learn and adapt, interacting with the environment and upgrading its behaviour, Blanchard said.


Read more:
AI poses national security threat, warns terror watchdog - The Guardian

Read More..

AI should be licensed like medicines or nuclear power, Labour suggests – The Guardian

Artificial intelligence (AI)

Exclusive: party calls for developers without a licence to be barred from working on advanced AI tools

The UK should bar technology developers from working on advanced artificial intelligence tools unless they have a licence to do so, Labour has said.

Ministers should introduce much stricter rules around companies training their AI products on vast datasets of the kind used by OpenAI to build ChatGPT, Lucy Powell, Labour's digital spokesperson, told the Guardian.

Her comments come amid a rethink at the top of government over how to regulate the fast-moving world of AI, with the prime minister, Rishi Sunak, acknowledging it could pose an existential threat to humanity.

One of the government's advisers on artificial intelligence also said on Monday that humanity could have only two years before AI is able to outwit people, the latest in a series of stark warnings about the threat posed by the fast-developing technology.

Powell said: My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that's governing how they are built, how they are managed or how they are controlled.

She suggested AI should be licensed in a similar way to medicines or nuclear power, both of which are governed by arms-length governmental bodies. That is the kind of model we should be thinking about, where you have to have a licence in order to build these models, she said. These seem to me to be the good examples of how this can be done.

The UK government published a white paper on AI two months ago, which detailed the opportunities the technology could bring, but said relatively little about how to regulate it.

Since then, a range of developments, including advances in ChatGPT and a series of stark warnings from industry insiders, have caused a rethink at the top of government, with ministers now hastily updating their approach. This week Sunak will travel to Washington DC, where he will argue that the UK should be at the forefront of international efforts to write a new set of guidelines to govern the industry.

Labour is also rushing to finalise its own policies on advanced technology. Powell, who will give a speech to industry insiders at the TechUK conference in London on 6 June, said she believed the disruption to the UK economy could be as drastic as the deindustrialisation of the 1970s and 1980s.

Keir Starmer, the Labour leader, is expected to give a speech on the subject during London Tech Week next week. Starmer will hold a shadow cabinet meeting in one of Google's UK offices next week, giving shadow ministers a chance to speak to some of the company's top AI executives.

Powell said that rather than banning certain technologies, as the EU has done with tools such as facial recognition, she thought the UK should focus on regulating the way in which they are developed.

Products such as ChatGPT are built by training algorithms on vast banks of digital information. But experts warn that if those datasets contain biased or discriminatory data, the products themselves can show evidence of those biases. This could have a knock-on effect, for example, on employment practices if AI tools are used to help make hiring and firing decisions.
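A toy example can make that mechanism concrete: if historical hiring decisions were biased against one group, a model fitted to that data reproduces the bias even for otherwise identical candidates. The sketch below uses entirely synthetic data and a generic model; it is not a claim about any real hiring system.

```python
# Synthetic illustration of dataset bias leaking into a hiring model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, size=n)   # protected attribute (0 or 1)
skill = rng.normal(size=n)           # genuinely job-relevant signal

# Historical decisions: skill mattered, but group 1 was systematically penalised.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Biased pipelines often include the attribute directly or via proxies;
# here the leak is made explicit by using it as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]  # identical skill, different group
    print(f"Predicted hire probability, group {g}: {p:.2f}")
```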

Powell said: Bias, discrimination, surveillance - this technology can have a lot of unintended consequences.

She argued that by forcing developers to be more open about the data they are using, governments could help mitigate those risks. This technology is moving so fast that it needs an active, interventionist government approach, rather than a laissez-faire one.

Matt Clifford, the chair of the Advanced Research and Invention Agency, which the government set up last year, said on Monday that AI was evolving much faster than most people realised. He said it could already be used to launch bioweapons or large-scale cyber-attacks, adding that humans could rapidly be surpassed by the technology they had created.

Speaking to TalkTV's Tom Newton Dunn, Clifford said: It's certainly true that if we try and create artificial intelligence that is more intelligent than humans and we don't know how to control it, then that's going to create a potential for all sorts of risks now and in the future. So I think there's lots of different scenarios to worry about but I certainly think it's right that it should be very high on the policymakers' agendas.

Asked when that could happen, he added: No one knows. There are a very broad range of predictions among AI experts. I think two years will be at the very most sort of bullish end of the spectrum.


See the original post:
AI should be licensed like medicines or nuclear power, Labour suggests - The Guardian

Read More..

Marr: Artificial intelligence is a clever monkey that we need to be worried about – LBC

6 June 2023, 18:12 | Updated: 6 June 2023, 18:48

Speaking at the start of Tonight With Andrew Marr, he said he had spoken to a former defence secretary who compared AI systems to psychopaths.

And Andrew tried to sum up just why artificial intelligence is a topic of concern.

He said: "Unless you've spent the last few weeks upside down in a wheelie bin you have heard a lot about AI recently - about how it's going to destroy our civilisation, or save it, or something or other, but you've gathered that it's very, very important.

"Rishi Sunak is popping over to see Joe Biden in Washington this week to talk about regulating AI. But if you're wondering: ok, but what, really, is artificial intelligence? What are they yattering on about? You're not alone.

"It's a form of computerised intelligence that learns stuff by itself. But as with a lot of complicated things, what we really need is a metaphor.


"So... AI is an enormous, very clever monkey. You invite it into your home and you teach it how to make breakfast, wash the clothes, clean the carpets and even look after all the boring stuff in your inbox.

"Think of that wonderful free time the Clever Monkey gives you. It's been told to look after your happiness and it's doing really well.

"Then the monkey decides that your toast and marmalade for breakfast is bad for your health so it starts to give you muesli. It thinks you look shabby in your much loved old breeks and jacket and so it quietly bins them.

"Pretty soon it looks as if Clever Monkey, still friendly, still looking out for your interests, is in fact in charge. Clever Monkey realises you don't much like your next door neighbour - so Clever Monkey pops over the hedge, breaks his jaw and sets fire to his living room.

"You're looking a bit worried so clever monkey mashes up some opioid drugs he's bought down the canal and feeds them to you in your evening cocoa.


"Friendly Clever Monkey realises you're a little lazy as well and so quite soon, he's doing your job - almost whatever it is - much better than you ever did.

"By now, of course, he's been through your inbox, found a way to dodge paying your taxes, and transferred all your savings into the Friendly Monkey peanuts and banana account - because he also knows that for you to be happy, Friendly Monkey must be happy as well.

"Now you may think that's just a silly story but it's my best go at trying to explain why artificial intelligence is something we need to worry about.


"Have you invited and the monkey into your home? Well, pretty soon he's in your smartphone, your TV, your computer. He's at your workplace.

"He's bringing lots of stuff into your social media. So frankly, yes, the Clever Monkey is already in your home.

"I bumped into Lord Reid, John Reid, the former Labour Defence secretary and home secretary in the street a couple of hours ago, And he's been thinking about this as well and he told me: 'What we're doing is creating an intelligence which is far smarter than we are except in one thing - because it's a machine it has no empathy.'

"And what do we call a very smart operator with no empathy, he asked? We call it a psychopath."

See the original post:
Marr: Artificial intelligence is a clever monkey that we need to be worried about - LBC

Read More..