Air Fryer vs Deep Fat Fryer: fried-and-tested by experts – Homes & Gardens
Whether you're looking to make fast, fluffy fries or quick, crispy bacon, both air fryers and deep fat fryers are great options. The deep fat fryer is a classic, delivering on familiar taste and texture. Air fryers are becoming increasingly popular, establishing themselves as a kitchen staple.
After extensive research and testing, our expert team has the professional advice to guide you to an informed decision. We've tested the best air fryers on the market. After comparing these products to a classic deep fat fryer, we can give a fair verdict on which you'll want to have in your kitchen.
When it comes down to it, an air fryer is better than a deep fat fryer. However, there's a lot to consider before you choose. We've compared both appliances on price, space, and taste to tell you what you need to know before you buy.
Today's best air fryer and deep fat fryer deals
Deep fat fryers heat oil to high temperatures. Once the oil is hot, you plunge your food into the oil, turning it to get an even fry. The cooking itself is quick, but make sure to account for time to heat and cool the oil before and after. Bear in mind that you'll need to stay by the fryer the whole time that your food is cooking, too.
Air fryers work with little to no oil. They are smaller machines which rapidly circulate hot air around a basket container to cook your food. The cooking takes a little longer, but it can produce results with a comparable taste and a similar texture. You won't need to stay near the appliance, because they often have paddles to keep food moving while it cooks. If they don't, the most it'll need is a shake or mix halfway through.
Results
WINNER: It's a tie
To start, air fryers will only cook battered foods if they're frozen, like breaded chicken or fish. If you want to make food with wet batter, like churros, you'll need a deep fat fryer. Having tried frozen food, vegetables, and the benchmark for all frying, fries, we were pleased with the results of both appliances.
Our team felt that deep frying gave the perfect results, as expected. However, Millie, our air fryer expert, preferred the taste of her air fryer's food. She told us that "the air fryer and deep fryer produced food which was shockingly similar in taste. The main difference was, when deep-frying, I wasn't able to season my fries until after I had cooked them, which meant that the air-fried plate was more flavorful." You could say that the flavors were baked right in during the cooking process.
If there isn't much difference in the way of taste, air fryers might win overall, since their food has a lower fat content. However, if you want to make churros, you'll need a deep fat fryer.
Cleaning up
WINNER: Air fryer
A common grievance with deep fryers is the clean-up process. Oil is tough to clean and, when hot, the fryer will likely spit oil onto your surfaces. Your food will have oil sitting on it after cooking, so you'll want some kitchen roll to soak that up.
Once finished with frying, you'll need to wait for the oil in your deep fat fryer to cool before either disposing of it or storing it somewhere. The most common solution is to let your oil cool, pour it into a non-recyclable container, and either keep it or put it in the garbage. Oil also has a lingering smell, so make sure you ventilate your kitchen.
On the whole, air fryers are easy to clean. They come with removable baskets which are often dishwasher safe. There isn't much oil involved in the process, so it doesn't get as messy as deep frying.
Cost
WINNER: Deep fat fryer
Air fryers tend to have a higher upfront cost than deep fat fryers. You can buy ovens with integrated air fryers if you are looking for value. We love the Instant Pot Duo Crisp with Ultimate Lid for covering multiple functions in one. Deep fryers tend to be less expensive; however, you'll need to replace the oil regularly. An example of a comparable deep fat fryer is the Progress EK2969P Compact Deep Fat Fryer. It's small and easy to store.
Instant Pot Duo Crisp with Ultimate Lid
We love this because it's so much more than an air fryer. It performed exceptionally, was easy to clean, and had capacity for everything from roast chickens to mashed potatoes. We loved that it has 11 different functions, so it's an appliance that can do more than air fry.
Progress EK2969P Compact Deep Fat Fryer
Millie, our expert, liked it because it's a competitive size in comparison to air fryers. It's easy to store and doesn't need a huge amount of oil. However, because it is small, the capacity isn't particularly large, so it is really a single-person appliance.
Size and look
WINNER: Air fryer
As air fryers continue to improve, they are getting smaller, more storable, and much slicker. If you want to pack it into a drawer, the Ninja Max XL Air Fryer is a brilliant option. Equally, our team loved the look of Beautiful by Drew Barrymore Touchscreen Air Fryer to leave on your countertop. Deep fryers have less of an aesthetic appeal, but you can buy small ones and stow them away in a cupboard.
The Ninja Max XL Air Fryer can crisp up fries in minutes and is perfectly sized for small households, but its plastic finish lacks refinement.
Beautiful 6-Quart Digital Air Fryer
The Beautiful 6-Quart Digital Air Fryer stands out thanks to its attractive design, which will look right at home in any contemporary kitchen.
Our verdict
For me, the air fryer is the clear winner. Even though the upfront cost can be a little more, it's easier to store, clean, and use. The taste test really helps the air fryer sit in the top spot for me; it's a healthier option, without compromising on flavor or texture. However, if you are looking to make churros and battered food, you'll need to buy a deep fryer.
How we test
We like all of our products to have been fried-and-tested, so we make sure that we have personally used an appliance before reviewing it. Where we haven't tried it, we research and read reviews thoroughly.
We were unable to try a deep fryer, but, luckily, Millie had already tested the T-Fal Actifry Genius + (alongside many others) against a Progress EK2969P Compact Deep Fat Fryer.
When testing, Millie was assessing each appliance on a number of factors:
Noise: Lots of noise doesn't always equate to lots of power and can make it hard to do other things around the house.
Speed: Deep fryers are quicker in cooking time, so it was important to look at how long these appliances would take exactly. Fries would take around 25 minutes in the air fryer, but some on our best air fryer list took 12 minutes.
Looks: These are often on your countertops, so we wanted to make sure that we accounted for how these look. In our roundup, we highlighted the less attractive features, if there were any.
Cleaning: Cleaning an air fryer is advertised as easy. Most baskets can go in the dishwasher. This was a key factor for choosing the air fryer over the deep fryer, so we scrutinized cleaning.
For more insight, our review guidelines explain more about our product review process.
For the most part, yes. Our expert tester, Millie Fender, told us that her partner couldn't tell the difference between most of the foods which she tested in the air fryer and deep fryer. However, if you're being picky, and looking for that guilty-pleasure grease, you'll need a deep fat fryer.
That depends on what health means to you. Air fryers are praised for using less oil to cook your food. For example, rather than plunging fries into a deep fat fryer, you will use a tablespoon, at most, of oil in an air fryer. This means that the fat content of your food will be reduced. This is considered to be generally healthier, but that doesn't apply to all people.
Yes, but not homemade batter. You can make bacon, fries, and vegetables, and heat up frozen battered food like chicken or fish. However, the air fryer cannot crisp up a wet batter like a deep fryer can.
In many instances, yes. As above, you can do most of the jobs of a deep fryer with an air fryer, including making competitively crispy and fluffy fries.
Air fryers take longer to cook your food. They can take up to twenty minutes where the deep fryer might only take two. That being said, the clean-up process is much faster with an air fryer.
Yes. If you have a deep pot or pan, some oil, and a slotted spoon you can use your home equipment as a fryer. This is a good option for saving on space too.
Vegetable oil, canola oil, and peanut oil are the most popular options. They have a higher smoke point, so are the best oils to use.
People tend to recommend that you change the oil after eight to ten uses. The color and quality of the oil will affect the taste, so it depends how sensitive you are to flavor.
There are lots of benefits to both appliances and you can use them to make some great meals and snacks. Deep fryers are classic and, in many ways, offer you more versatility in what you can fry. However, lots of air fryers are becoming integrated into other multi-cookers, which offer fantastic value for money. Think about space, taste, and price and you won't go wrong.
Millie Fender is Head of Reviews. She specializes in cooking appliances and also reviews outdoor grills and pizza ovens. She was tasked with reviewing the market leading air fryers, so is our expert on the topic. When she's not putting air fryers, and other appliances, through their paces in our testing kitchen, she'll be using the products at home in her day-to-day life.
Ontario poll shows deep dissatisfaction with Ford government despite high party support – Global News
Premier Doug Ford's government is receiving poor marks for its handling of nearly all of the issues that are top of mind for Ontarians, according to a new public opinion poll.
The Angus Reid survey of 881 Ontario residents found that while a majority would still vote for the Progressive Conservatives if an election were to be held today, there's an underlying dissatisfaction with how the government is performing.
If an election were held today, 38 per cent of respondents said they would vote for Doug Ford's PC Party and 30 per cent said they would support the Ontario NDP.
Support for the leaderless Ontario Liberals dropped to 20 per cent and Ontario Greens remain steady at six per cent of the total projected vote.
The poll, however, is less encouraging when it comes to key issues such as cost of living, housing affordability and health care.
A total of 83 per cent of those polled felt the government was doing a poor or very poor job on the issue of housing affordability, with 81 per cent critical of Ontario's record on the cost of living and inflation.
Health care, an area where the Ford government focused a flurry of announcements and new legislation at the start of the year, did not fare much better.
A total of 78 per cent of those polled felt the province had done a poor or very poor job on the health care file, compared to 19 per cent who felt it was good or very good.
Of all the issues polled, Ontarians appear to have the best impression of the Ford government's relationship with Ottawa, with just 47 per cent responding with poor or very poor.
Angus Reid suggested that even the province's reported victories may not be registering much public support.
Its poll found just 37 per cent felt the government was doing a good job on the economy and job creation. This, after the province announced a Volkswagen gigafactory would open in St. Thomas, Ont. in 2027, the polling group said.
© 2023 Global News, a division of Corus Entertainment Inc.
For ChatGPT creator OpenAI, Italy's ban may just be the start of trouble in Europe – Fortune
OpenAI CEO Sam Altman loves Italy, but the affection may not be mutual, at least not when it comes to OpenAI's flagship product, ChatGPT.
Italy temporarily banned ChatGPT last week on the grounds that it violates Europes strict data privacy law, GDPR. OpenAI immediately complied with the ban, saying it would work with Italian regulators to educate them on how OpenAIs A.I. software is trained and operates.
"We of course defer to the Italian government and have ceased offering ChatGPT in Italy (though we think we are following all privacy laws)," Altman tweeted, adding that Italy "is one of my favorite countries and I look forward to visiting again soon!"
The comments drew plenty of snark from other Twitter users for their slightly tone-deaf, ugly-American vibes. Meanwhile, Italy's deputy prime minister took the country's data regulator to task, saying the ban seemed excessive. But Rome's decision may be just the start of generative A.I.'s problems in Europe. As this newsletter was preparing to go to press, there were reports Germany was also considering a ban.
Meanwhile, here in the U.K., where I'm based, the data protection regulator followed Italy's ban with a warning that companies could very well fall afoul of Britain's data protection laws too if they weren't careful in how they developed and used generative A.I. The office issued a checklist for companies to use to help ensure they are in compliance with existing laws.
Complying with that checklist may be easier said than done. A number of European legal experts are actively debating whether any of the large foundation models at the core of today's generative A.I. boom, all of which are trained on vast amounts of data scraped from the internet (including, in some cases, personal information), comply with GDPR.
Elizabeth Renieris, a senior researcher at the Institute for Ethics in AI at the University of Oxford who has written extensively about the challenges of applying existing laws to newly emerging technology such as A.I. and blockchain, wrote on Twitter that she suspected GDPR actions against companies making generative A.I. will be impossible to enforce because "data supply chains are now so complex and disjointed that it's hard to maintain neat delineations between a data subject, controller, and processor (@OpenAI might try to leverage this)." Under GDPR, the privacy and data protection obligations differ significantly based on whether an organization is considered a controller of certain data, or merely a processor of it.
Lilian Edwards, chair of technology law at the University of Newcastle, wrote in reply to Renieris, "These distinctions chafed when the cloud arrived, frayed at the edges with machine learning and have now ripped apart with large models. No one wants to reopen GDPR fundamentals but I am not clear [the Court of Justice of the European Union] can finesse it this time."
Edwards is right that there's no appetite among EU lawmakers to revisit GDPR's basic definitions. What's more, the bloc is struggling to figure out what to do about large general-purpose models in the Artificial Intelligence Act it is currently trying to finalize, with the hope of having key EU Parliamentary committees vote on a consensus version on April 26. (Even then, the act won't really be finalized. The whole Parliament will get to make amendments and vote in early May, and there will be further negotiation between the Parliament, the EU Commission, which is the bloc's executive arm, and the European Council, which represents the bloc's various national governments.) Taken together, there could be real problems for generative A.I. based on large foundation models in Europe.
At an extreme, many companies may have to follow OpenAI's lead and simply discontinue offering these services to EU citizens. It is doubtful European politicians and regulators would want that outcome, and if it starts to happen, they will probably seek some sort of compromise on enforcement. That alone may not be enough. As has been the case with GDPR and trans-Atlantic data sharing, European courts have been quite open to citizens' groups going to court and obtaining judgments based on strict interpretations of the law that force national data privacy regulators to act.
At a minimum, uncertainty over the legal status of large foundation models may make companies, especially in Europe, much more hesitant to deploy them, especially in cases where they have not trained the model from scratch themselves. And this might be the case for U.S. companies that have international operations too; GDPR applies not just to customer data, but also employee data, after all.
With that, here's the rest of this week's news in A.I.
Jeremy Kahn (@jeremyakahn), jeremy.kahn@fortune.com
U.K. government releases A.I. policy white paper. The British government's Department for Science, Innovation and Technology published a white paper on how it wants to see A.I. governed. It urges a sector- and industry-specific approach, saying regulators should establish "tailored, context-specific approaches that suit the way A.I. is actually being used in their sectors," and calls for applying existing laws rather than creating new ones. The recommendations also lay out high-level principles in five main areas: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. While some A.I. and legal experts praised the sector-specific approach the white paper advocates, arguing it will make the rules more flexible than a one-size-fits-all approach and promote innovation, others worried that different regulators might diverge in their approach to identical issues, creating a confusing and messy regulatory patchwork that will actually inhibit innovation, CNBC reported.
Bloomberg creates its own LLM, BloombergGPT, for finance. Bloomberg, where I worked before coming to Fortune, is not new to machine learning. (I've periodically highlighted some of the ways Bloomberg has been using large language models and machine learning in this newsletter.) The company has access to vast amounts of data, much of it proprietary. This past week, Bloomberg unveiled BloombergGPT, a 50 billion parameter LLM, and the first ultra-large GPT-based language model the financial news company has ever trained. This puts it pretty far up there in the rankings of large models, although still far smaller than the largest models OpenAI, Google Brain, DeepMind, Nvidia, Baidu, and some other Chinese researchers have built. The interesting thing is that 51% of the data Bloomberg used was financial data, some of it its own proprietary data, that it curated specifically to train the model. The company reported that BloombergGPT outperformed general-purpose LLMs on tasks relevant to Bloomberg's own use cases, such as recognizing named entities in data, performing sentiment analysis on news and earnings reports, and answering questions about financial data and topics. Many think this is a path many large companies with access to lots of data will choose to take going forward, training their own proprietary LLM on their own data and tailored to their own use cases, rather than relying on more general foundation models built by the big tech companies.
Research collective creates open-source version of DeepMind visual language model as a step towards an open-source GPT-4 competitor. The nonprofit A.I. research group LAION released a free, open-source version of Flamingo, a powerful visual language model created by DeepMind a year ago. Flamingo is a fully multi-modal model, meaning it can take in images, video, and text as inputs and output in all those modes too. That enables it to describe images and also answer questions about them, as well as generating images (or possibly video) from text, similar to the way Stable Diffusion, Midjourney, and DALL-E can. Flamingo had some interesting twists in its architecture that enable it to do this, including a module called a Perceiver Resampler that reduces complex visual data to a much smaller number of tokens to be used in training, the use of a frozen language model, and other clever innovations you can read about in DeepMind's research paper.
Anyway, LAION decided to copy this architecture, apply it to its own open-source, multi-modal training data, and the result is OpenFlamingo.
Why should you care? Because LAION explicitly says it is doing this in the hopes that someone will be able to use OpenFlamingo to train a model that essentially replicates the capabilities of GPT-4 in its ability to ingest both text and images. This means everyone and anyone might soon have access to a model as powerful as OpenAI's most powerful A.I., GPT-4, at essentially no cost. That could either be a great thing or a terribly dangerous thing, depending on your perspective.
And another subtle dynamic here that doesn't often get discussed: one of the things that is continuing to drive OpenAI to release new, more powerful models and model enhancements (such as the ChatGPT plugins) so quickly is the competition it is facing not just from other tech players, such as Google, but the increasingly stiff competition it faces from open-source alternatives. These open-source competitors could easily erode the market share OpenAI (and its partner Microsoft) was hoping to control.
In order to maintain a reason for customers to pay for its APIs, OpenAI is probably going to have to keep pushing to release bigger, more powerful, more capable models. If you believe these models can be dangerous (because they are good for producing misinformation at scale, because of cybersecurity risks, or because you think they might hasten human extinction), then anything that incentivizes companies to put them out in the world with less time for testing and for installing guardrails is probably not a good thing.
ChatGPT gave advice on breast cancer screenings in a new study. Here's how well it did, by Alexa Mikhail
Former Google CEO Eric Schmidt says the tech sector faces a reckoning: What happens when people fall in love with their A.I. tutor?, by Prarthana Prakash
Nobel laureate Paul Krugman dampens expectations over A.I. like ChatGPT: History suggests large economic effects will take longer than many people seem to expect, by Chloe Taylor
Google CEO won't commit to pausing A.I. development after experts warn about profound risks to society, by Steve Mollman
How should we think about the division over last week's open letter calling for a six-month pause in the development of any A.I. system more powerful than GPT-4? I covered some of this in Friday's special edition of Eye on A.I. But there's a very nice essay on how politicized the discourse over A.I. risks is becoming, from VentureBeat's A.I. reporter, Sharon Goldman. It's worth a read. Check it out here.
Also, how should we feel about Sam Altman, the OpenAI CEO, who claims to be both a little bit frightened about advanced A.I. and, simultaneously, hellbent on creating it? Well, dueling profiles of Altman, one in the New York Times and one in the Wall Street Journal, try to sort this out. Both are worth a read.
The cynical take on Altman was put forth by Brian Merchant in an op-ed in the Los Angeles Times: namely, that fear-mongering about A.I., particularly about its ability to replace lots of people's jobs, only serves to hype the power of existing technologies and OpenAI's brand, boosting its sales.
I agree with some of Merchant's take. I do think OpenAI has very much become a commercially motivated enterprise, and that this explains a lot about why it is releasing powerful A.I. models so quickly and why it has done things like create the ChatGPT plugins. But I'm not sure about Merchant's take on Altman himself: that Altman's conflicted-genius schtick is simply that, schtick. Altman's concern with A.I. safety is not some newfound preoccupation that came about only once he had something to sell. It's clear from those Altman profiles that AGI and its potential for good and ill have been preoccupations of Altman's for a long time. It's what led him to cofound OpenAI with Elon Musk in the first place. And remember, when it started, OpenAI was just a nonprofit research lab, dedicated to open-sourcing everything it did. Altman didn't set out to run a commercial venture. (He may have thought there would be money to be made down the line, but making money doesn't seem to have been his real rationale. He was already enormously wealthy at the time.) So I think Altman's simultaneous expressions of longing for AGI and fear of it are not just about hyping A.I. I'm not saying the rationale is noble. I just don't think commercial motives explain Altman's strange stance on advanced A.I. I think it has a lot more to do with ego and with a kind of messiah complex, or at the very least, a kind of messianic thinking.
In fact, a lot of the stuff people who believe in AGI say only makes sense if viewed in religious terms. AGI believers are a lot like evangelicals waiting for the rapture. They both want the second coming and wish to hasten its arrival, and yet on some level they fear it. And while some of these folks are cynical in their beliefs, talking about Armageddon only because they have Bibles to sell (that would be Merchant's take), others are sincere believers who really do want to save souls. That doesn't mean you have to agree with these folks. But intentions do make a difference. Which do you think Altman is: Bible salesman or modern-day prophet?
The future, one year later – POLITICO
In this Oct. 30, 2008, photo, Electric Time Company employee Dan Lamoore adjusts the color on a 67-inch square LED color-changing clock at the plant in Medfield, Mass. | Elise Amendola/AP photo
When this newsletter launched exactly one year ago today, we promised to bring you a unique and uniquely useful look at questions that are addressed elsewhere as primarily business opportunities or technological challenges.
We had a few driving questions: What do policymakers need to know about world-changing technologies? What do tech leaders need to know about policy? Could we even get them talking to each other?
We're still working on that last one. But what we have brought you is a matter of public record: scoops on potentially revolutionary technologies like Web3, a blow-by-blow account of the nascent governing structure of the metaverse, and a procession of thinkers on the transformation AI is already causing, and how we might guide it.
Yeah, about that. In just a year, AI has gone from a powerful, exciting new technology still somewhat on the horizon to a culture-and-news-dominating, potentially even apocalyptic force. Change is always happening in the tech world, but sometimes it happens fast. And as the late Intel chief Gordon Moore might have said, that speed begets more speed, with seemingly no end in sight.
The future already looks a lot different than it looked in April 2022. And we don't expect it to look the same next year, or next month, or even next week. There's a lot of anxiety that AI in particular could change the future much, much faster than we're ready to address.
With that in mind, I spoke yesterday with Peter Leyden, founder of the strategic foresight firm Reinvent Futures and author of "The Great Progression: 2025 to 2050," a firmly optimistic reading of how technology will change society in radical ways, about how the rise of generative AI has shaken up the landscape, and what he sees on the horizon from here.
"This is the kind of explosive moment that a lot of us were waiting for, but it wasn't quite clear when it was going to happen," Leyden said. "I've been through many, many different tech cycles around, say, crypto, that haven't gone down this path. This is the first one that is really on the scale of the introduction of the internet."
Tech giants have been spending big on AI for more than a decade, with Google's acquisition of DeepMind as a signal moment. Devoted sports viewers might remember one particularly inescapable 2010s-era commercial featuring the rapper Common proselytizing about AI on Microsoft's behalf. And there is, of course, a long cultural history of AI speculation, dating back to James Cameron's Terminator and beyond.
"There is a kind of parallel to the mid-'90s, where people had a very hard time understanding both the digitization of the world and the globalization of the world that were happening," Leyden said. "We're seeing a similar tipping point with generative AI."
From that perspective, the current generative AI boom begs for a historical analogue. How about America Online? It might seem hopelessly dated now, but like ChatGPT it was a ubiquitous product that brought a revolutionary technology into millions of homes. From the perspective of 20 years from now, a semi-sophisticated chatbot might seem like the "You've got mail" of its time.
AI might seem a chiefly digital disruptor right now, but Leyden, who has a pretty good track record as a prognosticator, believes it could revolutionize real-world sectors from education to manufacturing to even housing.
"We've always thought those things are too expensive and can't be solved by technology, and we've finally now crossed the threshold to say, 'Oh wait, now we could apply technology to it,'" Leyden said. "The next five to 10 years are going to be amazing as this superpower starts to make its way through all these fields."
AI is also already powering innovation in other fields like energy, biotech, and media. That's where it's an especially salient comparison with the internet as a whole, not just a platform like social media. It's an engine, not the vehicle itself, and there are millions of designs yet to be built around it.
Largely for that reason, it's nearly impossible to predict what's going to happen next with AI. Maybe artificial general intelligence really will arise, posing an entirely different set of problems than the current policy concerns of regulating bias and accountability in decision-making algorithms. Or maybe it will start solving wickedly difficult problems, like nuclear fusion and mortality and space survival.
To get back to our mission here: we can't know. What we can do is continue to cover the bleeding edge of these technologies as they exist now, and where the people in charge of building and governing them aim to steer their development and, by proxy, ours.
A pair of George Mason University technologists are recommending the government take a novel, deliberate approach to AI regulation.
In an essay for GMU's Mercatus Center publication Discourse, Matthew Mittelsteadt and Brent Skorup propose "AI Progress," a novel framework to help guide AI progress and AI policy decisions. Their big ideas, among a handful of others:
"People will need time to understand the limitations of this technology, when not to use it and when to trust it (or not)," they write nearing their conclusion. "These norms cannot be developed without giving people the leeway needed to learn and apply these innovations."
Health and tech heavy hitters are teaming up to make their own recommendations about how AI should be used specifically in the world of health care.
As POLITICO's Ben Leonard reported today for Pro subscribers, the Coalition for Health AI, which includes Google, Microsoft, Stanford and Johns Hopkins, released a "Blueprint for Trustworthy AI" that calls for high transparency and safety standards for the tech's use in medicine.
"We have a Wild West of algorithms," Michael Pencina, coalition co-founder and director of Duke AI Health, told Ben. "There's so much focus on development and technological progress and not enough attention to its value, quality, ethical principles or health equity implications."
The report also recommends heavy human monitoring of AI systems as they operate, and a high bar for data privacy and security. The coalition is holding a webinar this Wednesday to discuss its findings.
Stay in touch with the whole team: Ben Schreckinger ([emailprotected]); Derek Robertson ([emailprotected]); Mohar Chatterjee ([emailprotected]); Steve Heuser ([emailprotected]); and Benton Ives ([emailprotected]). Follow us @DigitalFuture on Twitter.
If you've had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.
Read more here:
The future, one year later - POLITICO
Google denies Bard was trained with ChatGPT data – The Verge
Google's Bard hasn't exactly had an impressive debut, and The Information is reporting that the company is so interested in changing the fortunes of its AI chatbots that it's forcing its DeepMind division to help the Google Brain team beat OpenAI with a new initiative called Gemini. The Information's report also contains the potentially staggering thirdhand allegation that Google stooped so low as to train Bard using data from OpenAI's ChatGPT, scraped from a website called ShareGPT. A former Google AI researcher reportedly spoke out against using that data, according to the publication.
But Google is firmly and clearly denying the data was used: "Bard is not trained on any data from ShareGPT or ChatGPT," spokesperson Chris Pappas tells The Verge.
Pappas declined to answer whether Google had ever used ChatGPT data to train Bard in the past. "Unfortunately, all I can share is our statement from yesterday," he says.
According to The Information's reporting, a Google AI engineer named Jacob Devlin left Google to immediately join its rival OpenAI after attempting to warn Google not to use that ChatGPT data because it would violate OpenAI's terms of service and because its answers would look too similar. One source told the publication that Google stopped using that data after his warnings. Perhaps it threw out that portion of the training, too.
Update March 30th, 2:02PM ET: Google would not answer a follow-up question about whether it had previously used ChatGPT data for Bard, saying only that Bard isn't trained on data from ChatGPT or ShareGPT.
The Rise of AI Chatbots in Hearing Health Care : The Hearing Journal – LWW Journals
One of the most exciting recent technological innovations has been the deployment of artificial intelligence (AI) chatbots based on large language models (LLMs). AI chatbots are a type of generative AI that generates text; other examples of generative AI create pictures (e.g., DALL-E or Stable Diffusion; see Figures 1 and 2) or music (e.g., Jukebox). In November 2022, OpenAI publicly launched ChatGPT, an AI chatbot that engages in conversation with the user, generating responses to the user's questions (so-called prompts) that are almost indistinguishable from those of humans. The launch of ChatGPT represents a technological revolution, one that could change the face of health care as we know it, including hearing health care. ChatGPT is not an isolated example but part of a global race to develop the most compelling AI chatbot. Besides OpenAI, which is financed by Microsoft, other large corporations such as Meta, Google, and Tencent have launched similar proprietary products based on LLMs (e.g., LaMDA).
Figure 1: DALL-E-created artwork. Prompt: "The rise of AI chatbots in hearing healthcare, digital art."
Figure 2: DALL-E-created artwork. Prompt: "A futuristic illustration of the Planet of the AI chatbots in hearing health care, a movie about invading an ear that is also a hospital."
Table 1. AI Chatbots in Hearing Health Care: Applications, Risks, and Research Priorities for Patients, Clinicians, and Researchers.
AI chatbots are computer programs that use natural language processing (NLP) to communicate with humans. They are trained on large collections of language (e.g., all written books and most of the internet) to predict what response is most likely to a wide range of queries. For a human user, it may appear as if the system understands the question and can provide personalized advice, recommendations, and support. In reality, chatbots have no understanding of the world around them nor of the human body and its health status. Still, the potential applications for AI chatbots in health care are broad, with use cases for patients, clinicians, researchers, and training students. 1
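The "predict what response is most likely" mechanic described above can be sketched in a few lines. This is a toy illustration only: real LLMs score hundreds of thousands of tokens with a neural network, and the candidate words and scores below are invented for the example, not taken from any actual model.

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over candidates."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a model might assign to the next word after the
# context "The patient reports ringing in the ..."
logits = {"ears": 6.0, "room": 2.0, "phone": 1.0}
probs = softmax(logits)

# Greedy decoding: pick the most probable continuation.
best = max(probs, key=probs.get)  # "ears"
```

The point is that the system has no model of the ear or the patient; it only ranks continuations by likelihood, which is why the surrounding text stresses that chatbots have no real understanding of health status.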
The broad trend for the use of AI chatbots in health care is to increase accessibility (to medical knowledge) and affordability of care. Chatbots can provide 24/7 access to health care advice and support, reducing the need for in-person consultations, and potentially improving patient outcomes. Additionally, AI chatbots could potentially provide valuable insights and data to health care professionals, allowing them to make more informed decisions about patient care. More transparency on the data these chatbots have access to and use to produce their output is important and has been raised as a concern regarding existing systems. 2
In hearing health care, chatbots could be used to support patients, clinicians, and researchers (Table 1).
Patients can benefit from AI chatbots in hearing health care in various ways. One potential application is for initial screening and the recommendation of interventions. For example, a patient could interact with a chatbot that asks about their symptoms and hearing history and provides recommendations for self-management of symptoms, further evaluation, or treatment based on the patient's responses. 3 This could be particularly useful in cases where patients are unsure whether or not they are experiencing hearing loss, or are hesitant to seek medical attention, or where a profound hearing loss inhibits a conversation with a clinician. Chatbots can also serve as educational resources, self-management tools, and screening tools for comorbidities, including social needs. 4 Patients can receive information about hearing health, prevention tips, and advice on how to manage hearing conditions. Chatbots can provide information on the use of management options such as hearing aids, how to change batteries, and troubleshooting common issues. However, a potential risk is that chatbots may not provide accurate recommendations, leading to delayed diagnosis or inappropriate treatment.
Clinicians can benefit from AI chatbots in hearing health care in various ways. Chatbots can assist with data collection and analysis by collecting data on patients' hearing health, such as self-reported symptoms or hearing aid usage. Chatbots can provide summary reports or visualizations to help clinicians make treatment decisions, such as providing a summary report of a patient's hearing test results, highlighting areas of concern, and providing recommendations for further evaluation or treatment. Another potential application is to assist with decision-making and treatment planning. For medical applications, Google and DeepMind developed Med-PaLM, an LLM that incorporates clinical knowledge and has been evaluated using newly developed benchmarks. 5 Chatbots that unlock clinical knowledge could suggest treatment options based on a patient's hearing health history and symptoms and provide information on the benefits and risks of each option. For instance, chatbots could suggest a specific type of treatment based on a patient's hearing test results and preferences. Chatbots can also support clinicians in communicating information in more accessible and person-centered ways.
A potential risk is that chatbots may not provide the same level of clinical judgment and decision-making as a human health care professional. Additionally, there is a risk that the data collected by chatbots may be inaccurate, incomplete, biased, or dated, which could lead to misdiagnosis or inappropriate treatment.
Researchers can benefit from AI chatbots in hearing health care in various ways. Chatbots can collect large amounts of data from diverse populations, providing researchers with valuable insights into the prevalence and impact of hearing loss. For instance, chatbots can potentially collect data on the prevalence of tinnitus in different countries or regions. Another potential application is to facilitate clinical trials and research studies. Chatbots can screen potential participants for eligibility, collect informed consent, and administer study protocols. 4 For example, chatbots can collect self-reported data on hearing aid usage and satisfaction in large-scale clinical trials.
However, a potential risk is that the data collected by chatbots may be incomplete or biased, particularly if the chatbots are only accessible to certain populations or if the questions asked by the chatbots are not culturally sensitive or appropriate for all participants. 2 Additionally, chatbots may inadvertently exclude certain populations from research studies, such as individuals who do not have access to technology or who are not comfortable using it.
There is an urgent priority to investigate the (clinical) application of AI chatbots in hearing health care. General guidelines for the appropriate use of AI chatbots by researchers are being developed in this rapidly changing landscape. Academic journals have broadly agreed that chatbots may not be coauthors on research papers since they cannot take responsibility for their work. 2,6 In terms of hearing research applications, priority should be given to evaluate the validity and reliability of chatbots in collecting and analyzing hearing health data. Researchers and clinicians need to ensure that chatbots can provide accurate recommendations and treatment options and that the data collected by chatbots are reliable.
Usability is another important research priority to ensure that chatbots are user-friendly and accessible to as many patients as possible, regardless of their age or technological literacy. Cultural sensitivity is also important to ensure that chatbots are culturally sensitive and appropriate for all populations. There are also important ethical considerations for using chatbots in hearing health care, including issues related to informed consent, data privacy, and data security. Researchers will also need to assess long-term outcomes of using chatbots in hearing health care. This includes evaluating the impact of chatbots on patient outcomes such as quality of life, satisfaction, and adherence to treatment. Overall, the research priorities for AI chatbots in hearing research should focus on ensuring that chatbots are accurate, reliable, accessible, and culturally sensitive.
Guidelines for appropriate use of AI chatbots by clinicians or patients are not yet available. As the language models have been trained largely by using text from the internet, they are likely to have the same general opinions, stereotypes, and biases that are present on the internet. For this reason, we see a task for specialists and patient organizations to test what prompts yield the best results and provide the guidelines to avoid misuse or misunderstandings.
The rise of AI chatbots (based on LLMs) represents a significant technological advancement that has the potential to revolutionize hearing health care. AI chatbots have the potential to provide personalized advice and support to patients while also providing valuable insights and data to health care professionals. However, it is important to consider the potential risks and benefits of AI chatbots and to prioritize further research to ensure that these technologies are used ethically, effectively, and safely in hearing health care.
We would like to acknowledge the contribution of ChatGPT, an AI chatbot trained by OpenAI using a large language model (LLM), in providing valuable insights and guidance for this article. We experimented with prompt engineering and had conversations with ChatGPT playing the roles of patient and clinician to get a first impression of what AI chatbots, such as ChatGPT, could and could not offer.
The state of artificial intelligence: Stanford HAI releases its latest AI … – SiliconANGLE News
The Stanford Institute for Human-Centered Artificial Intelligence today released the latest edition of its AI Index Report, which explores the past year's machine learning developments.
Stanford HAI, as the institute is commonly known, launched in early 2019. It researches new AI methods and also studies the technology's impact on society. It releases its AI Index Report annually.
The latest edition of the study that was published today includes more than 350 pages. It covers a long list of topics, including the cost of AI training, efforts to mitigate bias in language models and the technologys impact on public policy. In each area that it surveys, the report points out multiple notable milestones that were reached during the past year.
The most advanced neural networks have become more complicated over the past year. Stanford HAI points to Google LLC's Minerva large language model as one example. The model, which debuted last June, features 540 billion parameters and took nine times more compute capacity to train than OpenAI LP's GPT-3.
The growing hardware requirements of AI software are reflected in the rising cost of machine learning projects. Stanford HAI estimates that PaLM, another Google model released last year, cost $8 million to develop. That's 160 times more than GPT-2, a predecessor to GPT-3 that OpenAI released in 2019.
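The two figures above imply a rough training-cost estimate for GPT-2, which a quick back-of-the-envelope calculation makes explicit (these are the report's estimates, not exact accounting):

```python
# Stanford HAI's estimate for PaLM, and the reported cost ratio to GPT-2.
palm_cost_usd = 8_000_000   # estimated PaLM development cost
cost_ratio = 160            # "160 times more than GPT-2"

# Implied GPT-2 cost: $8,000,000 / 160 = $50,000.
gpt2_cost_usd = palm_cost_usd / cost_ratio
```

That is, the 2019-era GPT-2 would have cost on the order of $50,000 to train under these estimates, underscoring how quickly training budgets have grown.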
Though AI models can perform significantly more tasks than a few years ago, they continue to have limitations. Those limitations span several different areas.
In today's report, Stanford HAI highlighted a 2022 research paper that found advanced language models struggle with some reasoning tasks. Tasks that require planning are often particularly challenging for neural networks. Last year, researchers also identified many cases of AI bias in both large language models and neural networks optimized for image generation.
Researchers' efforts to address those issues came to the fore in 2022. In today's report, Stanford HAI highlighted how a new model training technique called instruction tuning has shown promise as a method for mitigating AI bias. Introduced by Google in late 2021, instruction tuning involves rephrasing AI prompts to make them easier for a neural network to understand.
Last year, researchers not only developed more capable AI models but also found new applications for the technology. Some of those applications led to scientific discoveries.
In October 2022, Google's DeepMind machine learning unit detailed a new AI system called AlphaTensor. DeepMind researchers used the system to develop a more efficient way of carrying out matrix multiplications. A matrix multiplication is a mathematical calculation that machine learning models use extensively in the process of turning data into decisions.
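The kind of saving AlphaTensor searches for has a classic precedent: Strassen's 1969 scheme multiplies two 2x2 matrices with 7 scalar multiplications instead of the schoolbook 8, and AlphaTensor found comparable improvements for some larger matrix sizes. A minimal sketch comparing the two (for illustration; AlphaTensor itself discovers such schemes automatically):

```python
def naive_2x2(A, B):
    """Schoolbook 2x2 matrix multiplication: 8 scalar multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a * e + b * g, a * f + b * h],
            [c * e + d * g, c * f + d * h]]

def strassen_2x2(A, B):
    """Strassen's scheme: the same product with only 7 multiplications."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```

Saving one multiplication per 2x2 block compounds when the trick is applied recursively to large matrices, which is why finding such schemes matters for machine learning workloads.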
Last year also saw scientists apply AI to support research in a range of other areas, Stanford HAI pointed out. One project demonstrated that AI could be used to discover new antibodies. Another project, also led by Googles DeepMind, led to the development of a neural network that can control the plasma in a nuclear fusion reactor.
Stanford HAI's new report also dedicates multiple chapters to the impact of AI on society. Though large language models have only entered the public consciousness in recent months, AI is already making an impact across several areas.
In 2021, only 2% of federal AI-related bills proposed by U.S. lawmakers were passed into law. Last year, that number jumped to 10%. At the state level, meanwhile, 35% of all AI-related bills passed in 2022.
The impact of machine learning is also being felt in the education sector. According to Stanford HAI's research, 11 countries had officially endorsed and implemented a K-12 AI curriculum as of 2021. Meanwhile, the percentage of new computer science Ph.D. graduates from U.S. universities who specialized in AI nearly doubled between 2010 and 2021, to 19.1%.
9 signs you’re a deep thinker whose mind works differently – Hack Spirit
If thinking were an Olympic sport, I'm sure I'd be in with a chance of a medal.
And believe me, I'm certainly not bragging here.
In fact, there have been plenty of occasions I wish my deep thinking came with an off switch.
I suspect every deep thinker can relate to the chattering voice that, on occasion, they wish would just shut up.
But the truth is that I also love my ability to think deeply.
Not only do I think it makes me a more interesting person, but it also brings a richness to life that I wouldn't want to be without.
Maybe you can relate?
If you're a deep thinker, there are most likely certain signs that you recognize all too well.
It probably comes as no surprise. When you spend so much time in your own head, you tend to get to grips with what makes you tick.
Deep thinkers are reflective.
They are naturally analytical, and so they spend time considering their strengths and weaknesses too.
When you have a habit of self-reflection, it means you have high intrapersonal intelligence.
You take the time to think about your thought processes and your feelings. And this introspection builds your self-knowledge.
To me, self-awareness is the greatest gift to arise from deep thinking. Because it brings us the potential for change.
Only when we get to know ourselves can we honestly evaluate ourselves, our lives, and the world we live in.
Deep thinkers are tapped into the subtleties of life.
So they can observe the most minute of details.
Basically, they're not only good at reading themselves, but they're also good at reading the room in general.
This can give deep thinkers the gift of social awareness. Because with depth often comes heightened perception.
You may notice that you're a good judge of character and can suss people out quickly. You can probably pick up on someone else's energy or intentions.
What you're actually doing is reading the little signs that maybe other people miss.
Deep thinkers can be detail-oriented because they have a habit of studying things closely.
First off, can we please do away with the myth that introverts are shy or even quiet?
Sure, some are. But plenty of others are not.
For years I let people tell me I was an extrovert, just because I am a natural communicator, have lots of opinions, and I'm far from timid.
But they were so wrong.
Because introversion is so much more than just a personality type.
Introverts' brains are wired differently.
Research has found we process stimuli differently and have longer neural pathways.
So it can be more complicated and take longer for our brains to process interactions.
That's why we need plenty of alone time to recharge, and why we find it stimulating enough simply to be alone in our thoughts.
But what has this all got to do with deep thinkers?
Deep thinkers are often introverts because the very definition of introversion is that your energy tends to be more focused on your own inner world.
So there is often a big cross-over between introversion and deep thinking.
What are introvert tendencies?
We're talking about things like:
There are always two sides to every coin.
As a deep thinker myself, I wholeheartedly believe its strengths outweigh its burdens.
But I won't pretend it doesn't have its downsides at times.
Personally, my habit of deep thinking needs to be reined in sometimes.
Otherwise, I can fall into stress, low-level anxiety and unnecessary worry. My mind quickly spills over into hypervigilance and over-planning.
The reality is that thinking is hard to simply switch off.
So deep thinking can turn into overthinking or even rumination.
I overwhelm and flood my brain with contemplating and trying to preempt things. And like an overheating laptop, that stops it from functioning properly.
Meditation, breathwork, yoga, journaling, and exercise have become vital tools in my belt to nip overthinking in the bud and give my active brain a rest.
Introverts and deep thinkers often have a natural tendency to enjoy being alone.
It gives them time to contemplate their feelings and thoughts. After all, deep thinking isn't a group activity.
But when you're a really deep thinker, you might find that too much time alone can be bad for your mental health too.
Because as we've seen, deep thinking can slip into overthinking. And you're more likely to do this when you have a lot of time on your hands.
Deep thinkers are often perfectly happy to do very little.
It's not that they're boring; quite the opposite. Researchers have even found this is a sign of intelligence.
They dont need to be constantly doing something to feel stimulated. Their thoughts provide them with plenty of stimulation.
But just like the yin and yang of life, balancing this with staying active can be important to our well-being.
That way, we don't get too lost in thought.
Moving our bodies, getting lost in activities and the company of other people can pull us back into the present.
Deep thinkers tend not to see the world in black and white. They see all the nuanced shades of grey in between.
You might be naturally good at playing devil's advocate.
You aren't hasty in drawing conclusions.
You prefer to contemplate the deeper implications of something before making a decision.
This is a great skill to have in life. It ultimately promotes open-mindedness.
Not only that but it encourages empathy.
When we make an effort to understand and contemplate where other people are coming from, it's easier to connect with them.
The flip side of seeing life from different angles can be indecisiveness.
When you recognize life isn't so simple, you spend a lot of time analyzing and contemplating your options.
In many circumstances, this is wise. As the saying goes, only fools rush in.
So the ability to break things down and contemplate them logically can be handy.
Unfortunately in other situations, thinking ourselves around in circles probably does little good.
For example, research has suggested for really complicated decisions, going with your gut can be a much better strategy.
That's because intuition is far more logical than we often give it credit for.
It's not emotional or impulsive. It's actually our unconscious that's at work.
In an instant, it accesses a vast warehouse of information and experiences that are neatly (yet silently) stored away in the back of our brains.
It then presents this to you with a gut feeling about something.
And studies have found it can be a really effective way to make decisions, instead of getting stuck in contemplation and uncertainty.
Curiosity is the great fuel that feeds a deep thinker's mind.
It's a bit like the child who is forever asking "but why?"
Your thirst for knowledge can feel insatiable. You just love to figure things out.
There is always another layer to unpeel. There is always another mystery in life to uncover.
You most likely find learning fascinating, regardless of the subject.
Because it's the newness of the information or perspective that interests you more than the topic itself.
Everything you learn offers you more ideas and thoughts to contemplate.
Deep thinkers very rarely take things at face value. They have inquisitive natures that can't help but delve further below the surface.
"Why" isn't merely a question you ask; it's a state of mind you adopt.
And that state of mind is one of discovery and curiosity.
At the end of the day, our thoughts power our emotions.
So it's little surprise that deep thinking often leads to deep feeling too.
Deep thinkers are excellent at uncovering a richness to life. They go deeper in every sense, and that means on an emotional level too.
You're also highly tuned in to other people, and you're extremely conscious of your environment and surroundings.
So in many ways, you have a natural antenna that's going to pick up on a lot.
They say that ignorance is bliss because it shields you from so many things.
But as a deep thinker, you don't (in fact, you cannot) hide. Instead, you face and contemplate all the many facets of life.
Sensitivity is your superpower, and yes, at times your cross to bear as well.
The Future of AI: What Comes Next and What to Expect – The New York Times
In today's A.I. newsletter, the last in our five-part series, I look at where artificial intelligence may be headed in the years to come.
In early March, I visited OpenAI's San Francisco offices for an early look at GPT-4, a new version of the technology that underpins its ChatGPT chatbot. The most eye-popping moment arrived when Greg Brockman, OpenAI's president and co-founder, showed off a feature that is still unavailable to the public: He gave the bot a photograph from the Hubble Space Telescope and asked it to describe the image in painstaking detail.
The description was completely accurate, right down to the strange white line created by a satellite streaking across the heavens. This is one look at the future of chatbots and other A.I. technologies: A new wave of multimodal systems will juggle images, sounds and videos as well as text.
Yesterday, my colleague Kevin Roose told you about what A.I. can do now. Im going to focus on the opportunities and upheavals to come as it gains abilities and skills.
Generative A.I.s can already answer questions, write poetry, generate computer code and carry on conversations. As chatbot suggests, they are first being rolled out in conversational formats like ChatGPT and Bing.
But that's not going to last long. Microsoft and Google have already announced plans to incorporate these A.I. technologies into their products. You'll be able to use them to write a rough draft of an email, automatically summarize a meeting and pull off many other cool tricks.
OpenAI also offers an A.P.I., or application programming interface, that other tech companies can use to plug GPT-4 into their apps and products. And it has created a series of plug-ins from companies like Instacart, Expedia and Wolfram Alpha that expand ChatGPT's abilities.
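As a rough sketch of what "plugging GPT-4 into an app" looks like in practice, here is the shape of a chat-completions request as documented publicly by OpenAI at the time. The helper function and example message are our own illustration, and actually sending the request would require an API key and an HTTP POST to https://api.openai.com/v1/chat/completions; we only build the payload here.

```python
import json

def build_chat_request(user_message, model="gpt-4"):
    """Build the JSON body for a chat-completions call.

    Field names (model, messages, role, content) follow OpenAI's
    published chat-completions format; nothing is sent over the network.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("Summarize this meeting transcript: ...")
body = json.dumps(payload)  # this string would be POSTed to the API
```

A product team would wrap a call like this behind its own feature (email drafting, meeting summaries, and so on), which is exactly the integration pattern the paragraph above describes.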
Many experts believe A.I. will make some workers, including doctors, lawyers and computer programmers, more productive than ever. They also believe some workers will be replaced.
"This will affect tasks that are more repetitive, more formulaic, more generic," said Zachary Lipton, a professor at Carnegie Mellon who specializes in artificial intelligence and its impact on society. "This can liberate some people who are not good at repetitive tasks. At the same time, there is a threat to people who specialize in the repetitive part."
Human-performed jobs could disappear from audio-to-text transcription and translation. In the legal field, GPT-4 is already proficient enough to ace the bar exam, and the accounting firm PricewaterhouseCoopers plans to roll out an OpenAI-powered legal chatbot to its staff.
A New Generation of Chatbots
A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today's powerhouses into has-beens and creating the industry's next giants. Here are the bots to know:
ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).
Bing. Two months after ChatGPT's debut, Microsoft, OpenAI's primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot's occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.
Ernie. The search giant Baidu unveiled China's first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised live demonstration of the bot was revealed to have been recorded.
At the same time, companies like OpenAI, Google and Meta are building systems that let you instantly generate images and videos simply by describing what you want to see.
Other companies are building bots that can actually use websites and software applications as a human does. In the next stage of the technology, A.I. systems could shop online for your Christmas presents, hire people to do small jobs around the house and track your monthly expenses.
All that is a lot to think about. But the biggest issue may be this: Before we have a chance to grasp how these systems will affect the world, they will get even more powerful.
For companies like OpenAI and DeepMind, a lab that's owned by Google's parent company, the plan is to push this technology as far as it will go. They hope to eventually build what researchers call artificial general intelligence, or A.G.I.: a machine that can do anything the human brain can do.
As Sam Altman, OpenAI's chief executive, told me three years ago: "My goal is to build broadly beneficial A.G.I. I also understand this sounds ridiculous." Today, it sounds less ridiculous. But it is still easier said than done.
For an A.I. to become an A.G.I., it will require an understanding of the physical world writ large. And it is not clear whether systems can learn to mimic the length and breadth of human reasoning and common sense using the methods that have produced technologies like GPT-4. New breakthroughs will probably be necessary.
The question is, do we really want artificial intelligence to become that powerful? A very important related question: Is there any way to stop it from happening?
Many A.I. executives believe the technologies they are creating will improve our lives. But some have been warning for decades about a darker scenario, where our creations don't always do what we want them to do, or they follow our instructions in unpredictable ways, with potentially dire consequences.
A.I. experts talk about "alignment": that is, making sure A.I. systems are in line with human values and goals.
Before GPT-4 was released, OpenAI handed it over to an outside group to imagine and test dangerous uses of the chatbot.
The group found that the system was able to hire a human online to defeat a Captcha test. When the human asked if it was a robot, the system, unprompted by the testers, lied and said it was a person with a visual impairment.
Testers also showed that the system could be coaxed into suggesting how to buy illegal firearms online and into describing ways to make dangerous substances from household items. After changes by OpenAI, the system no longer does these things.
But its impossible to eliminate all potential misuses. As a system like this learns from data, it develops skills that its creators never expected. It is hard to know how things might go wrong after millions of people start using it.
"Every time we make a new A.I. system, we are unable to fully characterize all its capabilities and all of its safety problems, and this problem is getting worse over time rather than better," said Jack Clark, a founder and the head of policy of Anthropic, a San Francisco start-up building this same kind of technology.
And OpenAI and giants like Google are hardly the only ones exploring this technology. The basic methods used to build these systems are widely understood, and other companies, countries, research labs and bad actors may be less careful.
Ultimately, keeping a lid on dangerous A.I. technology will require far-reaching oversight. But experts are not optimistic.
"We need a regulatory system that is international," said Aviv Ovadya, a researcher at the Berkman Klein Center for Internet & Society at Harvard who helped test GPT-4 before its release. "But I do not see our existing government institutions being able to navigate this at the rate that is necessary."
As we told you earlier this week, more than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that A.I. tools present "profound risks to society and humanity."
A.I. developers are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict or reliably control," according to the letter.
Some experts are mostly concerned about near-term dangers, including the spread of disinformation and the risk that people would rely on these systems for inaccurate or harmful medical and emotional advice.
But other critics are part of a vast and influential online community called "rationalists" or "effective altruists," who believe that A.I. could eventually destroy humanity. This mind-set is reflected in the letter.
We can speculate about where A.I. is going in the distant future, but we can also ask the chatbots themselves. For your final assignment, treat ChatGPT, Bing or Bard like an eager young job applicant and ask it where it sees itself in 10 years. As always, share the answers in the comments.
Alignment: Attempts by A.I. researchers and ethicists to ensure that artificial intelligences act in accordance with the values and goals of the people who create them.
Multimodal systems: A.I.s similar to ChatGPT that can also process images, video, audio, and other non-text inputs and outputs.
Artificial general intelligence: An artificial intelligence that matches human intellect and can do anything the human brain can do.
Kevin here. Thank you for spending the past five days with us. It's been a blast seeing your comments and creativity. (I especially enjoyed the commenter who used ChatGPT to write a cover letter for my job.)
The topic of A.I. is so big, and so fast-moving, that even five newsletters isn't enough to cover everything. If you want to dive deeper, you can check out my book, "Futureproof," and Cade's book, "Genius Makers," both of which go into greater detail about the topics we've covered this week.
Cade here: My favorite comment came from someone who asked ChatGPT to plan a route through the trails in their state. The bot ended up suggesting a trail that did not exist as a way of hiking between two other trails that do.
This small snafu provides a window into both the power and the limitations of today's chatbots and other A.I. systems. They have learned a great deal from what is posted to the internet and can make use of what they have learned in remarkable ways, but there is always the risk that they will insert information that is plausible but untrue. Go forth! Chat with these bots! But trust your own judgment too!
The Future of AI: What Comes Next and What to Expect - The New York Times
Hypnotic Trailer: Ben Affleck Stars in Mind-Bending Action Thriller From Robert Rodriguez – Variety
After a peek at director Robert Rodriguez's action thriller "Hypnotic" at this year's SXSW, audiences won't have to wait much longer to catch the full version in theaters this spring.
IGN has unveiled the first official trailer for the upcoming psychological thriller, which follows police detective Daniel Rourke (Ben Affleck) as he searches for his missing daughter Minnie (Hala Finley). He soon learns she is associated with a series of ongoing robberies conducted by a mysterious man (William Fichtner) with hypnotic powers.
The trailer gives audiences a look at Affleck's frantic search for his missing daughter. The desperate dad slowly begins to spiral out of control once his investigation pushes him to confront his deepest, darkest fears. With assistance from psychic Diana Cruz (Alice Braga), Daniel sets off to pursue the mysterious man through his string of robberies and get Minnie home safe. As he finds out in the trailer, people with hypnotic powers can force their victims to see and feel things that aren't real.
Affleck, Finley, Fichtner and Braga are joined by Jeff Fahey, Kelly Frye, JD Pardo, Bonnie Discepolo, Dayo Okeniyi, Derek Russo and Corina Calderon.
Following its SXSW premiere, "Hypnotic" received favorable reviews, with Variety's Peter Debruge writing that "the typical popcorn-munching multiplex patron would never suspect how deep this Russian-doll mystery goes. Better to strap in and go along for the ride in the latest example of creativity-within-constraints."
Watch the "Hypnotic" trailer below. The film is set to debut in theaters on May 12.