Category Archives: AI

This Week in AI: A Battle for Humanity or Profits? – PYMNTS.com

There's some in-fighting going on in the artificial intelligence (AI) world, and one prominent billionaire claims the future of the human race is at stake. Elon Musk is taking legal action against Microsoft-backed OpenAI and its CEO, Sam Altman, alleging the company has strayed from its original mission to develop artificial intelligence for the collective benefit of humanity.

Musk's attorneys filed a lawsuit on Thursday (Feb. 29) in San Francisco, asserting that in 2015, Altman and Greg Brockman, co-founders of OpenAI, approached Musk to assist in establishing a nonprofit focused on advancing artificial general intelligence for the betterment of humanity.

Although Musk helped initiate OpenAI in 2015, he departed from its board in 2018. Previously, in 2014, he had voiced concerns about the risks associated with AI, suggesting it could pose more significant dangers than nuclear weapons.

The lawsuit highlights that OpenAI, Inc. still claims on its website to prioritize ensuring that artificial general intelligence benefits all of humanity. However, the suit contends that in reality, OpenAI, Inc. has evolved into a closed-source entity effectively operating as a subsidiary of Microsoft, the world's largest technology company.

When it comes to cybersecurity, AI brings both risks and rewards. Google CEO Sundar Pichai and other industry leaders say artificial intelligence is key to enhancing online security. AI can accelerate and streamline the management of cyber threats. It leverages vast datasets to identify patterns, automating early incident analysis and enabling security teams to quickly gain a comprehensive view of threats, thus hastening their response.

Lenovo CTO Timothy E. Bates told PYMNTS that AI-driven tools, such as machine learning for anomaly detection and AI platforms for threat intelligence, are pivotal. Deep learning technologies dissect malware to decipher its composition and potentially deconstruct attacks. These AI systems operate behind the scenes, learning from attacks to bolster defense and neutralize future threats.
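The anomaly-detection idea Bates describes can be sketched in a few lines. This toy example is not Lenovo's tooling; the traffic numbers and the three-standard-deviation threshold are invented for illustration, and real systems use far richer features and models. It only shows the principle: learn what "normal" looks like, then flag sharp deviations.

```python
import statistics

def fit_baseline(samples):
    """Learn a simple baseline (mean and standard deviation) from normal traffic."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Requests-per-minute observed during normal operation (made-up numbers).
normal_traffic = [120, 130, 125, 118, 122, 127, 131, 119, 124, 126]
baseline = fit_baseline(normal_traffic)

print(is_anomalous(125, baseline))  # typical load -> False
print(is_anomalous(900, baseline))  # sudden spike -> True
```

In practice the "baseline" would be a learned model over many features (source IPs, payload sizes, timing), but the workflow is the same: fit on known-good data, then score new events against it.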

With the global shift toward a connected economy, cybercrime is escalating, causing significant financial losses, including an estimated $10.3 billion in the U.S. alone in 2022, according to the FBI.

Get set for lots more books that are authored or co-authored by AI. Inkitt, a startup leveraging artificial intelligence (AI) to craft books, has secured $37 million. Inkitt's app enables users to self-publish their narratives. By employing AI and data analytics, it selects stories for further development and markets them on its Galatea app.

This technological shift offers both opportunities and challenges.

Zachary Weiner, CEO of Emerging Insider Communications, which focuses on publishing, shared his insights on the impact of AI on writing with PYMNTS. Writers gain significantly from the vast new toolkit AI provides, enhancing their creative process with AI-generated prompts and streamlining tasks like proofreading. AI helps them overcome traditional brainstorming limits, allowing for the fusion of ideas into more intricate narratives. It simplifies refining their work, letting them concentrate on their primary tasks.

But he warns of the pitfalls AI introduces to the publishing world. "AI is making its way into all aspects of writing and content creation, posing a threat to editorial roles," he said. "The trend towards replacing human writers with AI for cost reduction and efficiency gains is not just a possibility but a current reality."

The robots are coming, and they are getting smarter. New advancements in artificial intelligence (AI) are making it possible for companies to create robots with better features and improved abilities to interact with humans.

Figure AI has raised $675 million to develop AI-powered humanoid robots. Investors include Jeff Bezos' Explore Investments and tech giants like Microsoft, Amazon, Nvidia, OpenAI, and Intel. Experts say this investment shows a growing interest in robotics because of AI.

According to Sarah Sebo, an assistant professor of computer science at the University of Chicago, AI can help robots understand their surroundings better, recognize objects and people more accurately, communicate more naturally with humans and improve their abilities over time through feedback.

Last March, Figure AI introduced the Figure 01 robot, designed for various tasks, from industrial work to household chores. Equipped with AI, this robot mimics human movements and interactions.

The company hopes these robots will take on risky or repetitive tasks, allowing humans to focus on more creative work.

Read the rest here:

This Week in AI: A Battle for Humanity or Profits? - PYMNTS.com

Introducing Mistral-Large on Azure in partnership with Mistral AI – Microsoft

The AI industry is undergoing a significant transformation with growing interest in more efficient and cost-effective models, emblematic of a broader trend in technological advancement. In the vanguard is Mistral AI, an innovator and trailblazer. Their commitment to fostering the open-source community and achieving exceptional performance aligns harmoniously with Microsoft's commitment to developing trustworthy, scalable, and responsible AI solutions.

Today, we are announcing a multi-year partnership between Microsoft and Mistral AI, a recognized leader in generative artificial intelligence. Both companies are fueled by a steadfast dedication to innovation and practical applications, bridging the gap between pioneering research and real-world solutions.

Introducing Mistral Large, our most advanced large language model (LLM)

This partnership gives Mistral AI access to Azure's cutting-edge AI infrastructure to accelerate the development and deployment of its next-generation large language models (LLMs). It also represents an opportunity for Mistral AI to unlock new commercial opportunities, expand into global markets, and foster ongoing research collaboration.

"We are thrilled to embark on this partnership with Microsoft. With Azure's cutting-edge AI infrastructure, we are reaching a new milestone in our expansion, propelling our innovative research and practical applications to new customers everywhere. Together, we are committed to driving impactful progress in the AI industry and delivering unparalleled value to our customers and partners globally."

Microsofts partnership with Mistral AI is focused on three core areas:

In November 2023, at Microsoft Ignite, Microsoft unveiled the integration of Mistral 7B into the Azure AI model catalog accessible through Azure AI Studio and Azure Machine Learning. We are excited to announce Mistral AI's flagship commercial model, Mistral Large, available first on Azure AI and the Mistral AI platform, marking a noteworthy expansion of our offerings. Mistral Large is a general-purpose language model that can deliver on any text-based use case thanks to state-of-the-art reasoning and knowledge capabilities. It is proficient in code and mathematics, able to process dozens of documents in a single call, and handles French, German, Spanish, and Italian (in addition to English).

This latest addition of Mistral AIs premium models into Models as a Service (MaaS) within Azure AI Studio and Azure Machine Learning provides Microsoft customers with a diverse selection of the best state-of-the-art and open-source models for crafting and deploying custom AI applications, paving the way for novel AI-driven innovations.
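As a rough sketch of what consuming such a hosted model looks like, the snippet below assembles a chat-style request. The endpoint URL, key placeholder, and payload fields here are assumptions modeled on common chat-completions APIs, not the documented Azure interface; the real endpoint and authentication scheme come from the Mistral Large model card in Azure AI Studio.

```python
import json

# Hypothetical sketch of calling a chat model served through Azure AI's
# Models as a Service. The URL and key below are placeholders, and the
# payload shape is an assumption, not the documented Azure API.
ENDPOINT = "https://<your-deployment>.inference.ai.azure.com/v1/chat/completions"
API_KEY = "<your-azure-ai-key>"

def build_chat_request(prompt, language="French"):
    """Assemble headers and a JSON chat payload for a single user prompt."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }
    payload = {
        "messages": [
            # Mistral Large handles French, German, Spanish and Italian
            # in addition to English, so the answer language can be steered.
            {"role": "system", "content": f"Answer in {language}."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,
        "max_tokens": 256,
    }
    return headers, json.dumps(payload)

headers, body = build_chat_request("Summarise the attached clause in two sentences.")
print(json.loads(body)["messages"][1]["role"])  # -> user
```

Sending this payload (e.g. with an HTTP client) against the deployed endpoint would return the model's reply; only the endpoint and credentials change between Azure AI and Mistral's own platform.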

"We have tested Mistral Large through the Azure AI Studio in a use case aimed at internal efficiency. The performance was comparable with state-of-the-art models, with even better latency. We are looking forward to exploring this technology further in our business."

"After exploring Mistral Large during its early access period, we've been impressed by its performance on medical terminology. As we continue to innovate in healthcare, we're open to collaborations that can help us and our partners grow together. Mistral AI represents an exciting opportunity for mutual advancement in artificial intelligence, both in France and internationally."

"The Mistral AI models have been crucial in enhancing productivity and collaboration at CMA CGM. Their advanced capabilities have significantly improved the performance of our internal personal assistant, MAIA. Employees are now able to quickly access and engage with information like never before. We are confident that Mistral AI on Azure is the right choice to support our employees and drive innovation across our organization."

Microsoft is committed to supporting global AI innovation and growth, offering world-class datacenter AI infrastructure, and developing technology securely to empower individuals with the skills they need to leverage AI effectively. This partnership with Mistral AI is founded on a shared commitment to build trustworthy and safe AI systems and products. It further reinforces Microsofts ongoing efforts to enhance our AI offerings and deliver unparalleled value to our customers. Additionally, the integration into AI Studio ensures that customers can utilize Azure AI Content Safety and responsible AI tools, further enhancing the security and reliability of AI solutions.

Visit the Mistral Large model card and sign in with your Azure subscription to get started with Mistral Large on Azure AI today. You can also review the technical blog to learn how to use Mistral Large on Azure AI. Visit Mistral AIs blog to get deeper insights about the model.

Build intelligent apps at enterprise scale with the Azure AI portfolio

The rest is here:

Introducing Mistral-Large on Azure in partnership with Mistral AI - Microsoft

China’s Rush to Dominate A.I. Comes With a Twist: It Depends on U.S. Technology – The New York Times

In November, a year after ChatGPT's release, a relatively unknown Chinese start-up leaped to the top of a leaderboard that judged the abilities of open-source artificial intelligence systems.

The Chinese firm, 01.AI, was only eight months old but had deep-pocketed backers and a $1 billion valuation and was founded by a well-known investor and technologist, Kai-Fu Lee. In interviews, Mr. Lee presented his A.I. system as an alternative to options like Meta's generative A.I. model, called LLaMA.

There was just one twist: Some of the technology in 01.AI's system came from LLaMA. Mr. Lee's start-up then built on Meta's technology, training its system with new data to make it more powerful.

The situation is emblematic of a reality that many in China openly admit. Even as the country races to build generative A.I., Chinese companies are relying almost entirely on underlying systems from the United States. China now lags the United States in generative A.I. by at least a year and may be falling further behind, according to more than a dozen tech industry insiders and leading engineers, setting the stage for a new phase in the cutthroat technological competition between the two nations that some have likened to a cold war.

"Chinese companies are under tremendous pressure to keep abreast of U.S. innovations," said Chris Nicholson, an investor with the venture capital firm Page One Ventures who focuses on A.I. technologies. "The release of ChatGPT was yet another Sputnik moment that China felt it had to respond to."

Jenny Xiao, a partner at Leonis Capital, an investment firm that focuses on A.I.-powered companies, said the A.I. models that Chinese companies build from scratch "aren't very good", leading many Chinese firms to use fine-tuned versions of Western models. She estimated China was two to three years behind the United States in generative A.I. developments.


See the original post here:

China's Rush to Dominate A.I. Comes With a Twist: It Depends on U.S. Technology - The New York Times

Join us in 2024 events to get your teams AI-ready – The Official Microsoft Blog – Microsoft

2023 was a year of exciting growth and innovation here at Microsoft. This year's focus is to empower our customers and partners through AI transformation, and we're excited to share what will be an impactful lineup of events for 2024. Attending any of these events provides you with the opportunity to learn, grow and make defining connections with experts from around the world.

Expect to see enhancements in some of this years events. Azure AI-powered natural language assistants will provide personalized session recommendations, summarize content and answer your event-related questions. To meet the needs of a global audience, we are also offering options to participate in person, online and on-demand so you can choose what format works best for you. We structure these events to support the goals of our audiences and ensure that anyone attending has a great experience.

Visit our events site to find out which ones are right for you.

It's not too late to register for this ongoing series of one-day, in-person experiences around the world. These events bring together those on the cutting edge of innovation, including decision-makers, industry experts, thought leaders and developers, to focus on how AI will revolutionize work. So far, we've welcomed thousands of senior leaders and developers in six locations around the globe with keynotes highlighting the latest innovations in AI. Tour stops remain in Berlin, Paris, São Paulo and Seoul, where you can attend interactive workshops and learn how you can unlock the power of AI. Go to the Microsoft AI Tour site to sign up.

In-demand experts, distinguished engineers and developers are gathering in Seattle for our annual Microsoft Build. This celebration of technology is a chance to hear the latest announcements and get hands-on with new technology. Learn how to create new features and opportunities with AI and copilots, dive deep into the latest tech, and develop the skills that are needed for tomorrow today.

Seattle, Washington & online | May 21-23, 2024

We are evolving the event previously known as Microsoft Inspire. In July, we will kick off our fiscal year with partners in tandem with our Microsoft sellers by providing a digital engagement to share strategic priorities, investments and key program changes. We look forward to sharing more details soon.

Online | July 2024

With this change, we will also welcome partners to join us at Microsoft Ignite for an in-person experience in November to see the latest Microsoft innovations, network and celebrate the Partner of the Year Award winners.

Our biggest event of the year is getting even bigger for customers and partners, and we're returning to Chicago!

Join IT professionals, implementers, developers, architects and more in checking out the latest tech Microsoft has to offer. With demos and firsthand access to new AI solutions and copilots, this is your chance to explore the latest tools, receive deep technical training and get questions answered by Microsoft experts. We're bringing the best of our customer and partner event experiences to the Windy City and online so you can participate in the festivities and discover how AI can enhance your organization.

In addition to seeing the latest technology firsthand, senior leaders and decision-makers are invited to learn more about how to lead in the era of AI and find robust networking opportunities.

If you're looking to expand your AI knowledge, create connections and push the boundaries of what we can accomplish together, there's no better event than Microsoft Ignite.

Chicago, Illinois & online | Nov. 18-22, 2024

Find an event in your region

Visit our full global events catalog for a complete list of events, including some that could even be in your area. There, you can filter events by product, role or industry to find something specific to your needs or interests.

We hope to see you there

It's very exciting to bring you opportunities that showcase the growth and innovation that's being done at Microsoft to help you do more with AI. Whether you're a customer, partner, IT professional, decision-maker or developer, if you're looking to achieve more, there's an event for you.

Tags: Microsoft AI Tour, Microsoft Build, Microsoft Ignite, Microsoft Inspire

See the article here:

Join us in 2024 events to get your teams AI-ready - The Official Microsoft Blog - Microsoft

AI meets Old Masters in the fight to authenticate paintings – Financial Times

Artificial intelligence is making its mark on the art world, encroaching even on fustier areas such as the Old Masters trade. AI will, for instance, be a talking point during the Tefaf art and antiques fair in Maastricht next week: Carina Popovici, chief executive of the Swiss-based AI company Art Recognition, will reveal at the accompanying Art Business Conference how it recently attributed a painting to a Renaissance German artist.

Art Recognition, which was founded five years ago, has an AI system which, it says, offers "a precise and objective authenticity evaluation" of an artwork. On its website, the company says it has completed more than 500 authenticity evaluations, verifying contested works such as an 1889 self-portrait by Vincent van Gogh at the National Museum in Oslo.

Attributions matter in the art world: confirming the authorship of a work can increase the price if the artist is a star name, and can also boost scholarship in the field. The Adoration of the Kings, offered at auction in 2021 with an estimate of £10,500-£16,000 as "Circle of Rembrandt", was later attributed to the Dutch master himself and sold for £10.9mn with fees at Sotheby's in December. And it is not only collectors and dealers who want queries settled: Popovici says Art Recognition is used by wealth management services and legal professionals as well. Christie's says it is watching developments in the area of AI with interest.

"AI is excellent at so-called pattern recognition," says Jo Lawson-Tancred, author of the forthcoming publication AI and the Art Market, "so it will have an easier time than humans learning distinguishing features if shown enough examples by a particular artist." It can usually flag any paintings that do not fit an artist's pattern, but AI does not excel at grasping context, "so human reasoning is still integral," she adds.

"I think that a lot depends on the data that is fed into the AI system," says Carlo Milano of Callisto Fine Arts, London. "For example, if a questionable catalogue raisonné is used to input data about an artist, then the conclusions can be questionable." The work of an art dealer involves extensive psychology, he explains, outlining that AI will provide more information and reduce the margin of error, but will never completely replace hands-on experience.

Conservators are concerned about whether AI can take into account factors such as a filthy layer of varnish, wear or damage. Art professionals are indeed largely sceptical about whether AI will ever supplement or replace the human eye in judging a work of art.

Art Recognition was caught up in a row last year over a painting known as the de Brécy Tondo, believed to be by the Renaissance master Raphael. In January 2023, an analysis by two UK universities (Bradford and Nottingham), using AI-assisted facial-recognition software, concluded that the faces in the work were identical to those in another Raphael painting, the Sistine Madonna (c. 1513), thus claiming that the de Brécy Tondo was by the master. However, Art Recognition also analysed the piece, by contrast stating that the de Brécy Tondo is not by Raphael, with an 85 per cent probability rating.

The Raphael dispute has since broadened, highlighting the strengths and weaknesses of AI authentication: different programs can produce different results. In December 2023, a team led by scientists from the University of Bradford presented further findings about the art of Raphael in a peer-reviewed paper published in the Heritage Science journal. Their program compared the details of authentic Raphael paintings from the database with the test image, examining in depth the colour palette, tonal values, hues and brushstroke patterns.

They concluded that the face of Joseph in the artist's work Madonna della rosa, housed at the Prado in Madrid, may not be by the Renaissance artist. The rest of the work is by his hand, say the university specialists, who include Prof Hassan Ugail, director of Bradford's Centre for Visual Computing. Ugail says his most recent algorithm recognises authentic works by Raphael with 98 per cent accuracy.

But Popovici challenges Ugail's findings after investigating the training data made publicly available by the university research group. There are no negative examples of Madonna paintings (works not by Raphael but resembling his style) used by the group in its recent Raphael data set, she tells the Financial Times. The validity and scope of the data sets used in such AI programs (the material the software is working from) are crucial.

"Like an expert learning from examples," says Popovici, "an AI's ability to recognise patterns and make assessments depends heavily on how representative its training data is." "Without exposure to both authentic and imitative examples of a theme, an AI is inclined to classify as authentic those works that resemble the images in its positive training set," she adds.
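Popovici's point about training data can be made concrete with a deliberately tiny toy model. The "stylometric" features and every number below are invented, and this is not how Art Recognition's system works; the sketch only shows why a model trained without negative examples tends to accept anything that resembles its positives.

```python
import math

# Each work is reduced to two made-up features:
# (brushstroke regularity, palette similarity), both in [0, 1].
authentic = [(0.82, 0.80), (0.85, 0.83), (0.80, 0.86), (0.84, 0.79)]
imitations = [(0.76, 0.77), (0.74, 0.80), (0.77, 0.74)]  # "in the style of"

candidate = (0.75, 0.78)  # a skilled pastiche resembling the positives

# Positive-only rule: accept anything close to the centroid of known
# authentic works. With no negative examples, resemblance means acceptance.
centroid = tuple(sum(c) / len(authentic) for c in zip(*authentic))
positive_only_verdict = math.dist(candidate, centroid) < 0.15

# Discriminative rule: 1-nearest-neighbour over both labelled classes.
labelled = [(p, "authentic") for p in authentic] + \
           [(p, "imitation") for p in imitations]
nearest = min(labelled, key=lambda item: math.dist(candidate, item[0]))
discriminative_verdict = nearest[1] == "authentic"

print(positive_only_verdict)   # True: the pastiche passes
print(discriminative_verdict)  # False: its nearest neighbour is an imitation
```

The same candidate is accepted by the positive-only rule and rejected once stylistically similar negatives enter the training set, which is exactly the gap Popovici identifies in the Bradford data set.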

Ugail says that there are other ways to train an AI program than the one Popovici suggests, but strikes a conciliatory note, addressing wider concerns about the impact of AI. "This is not a case of AI taking people's jobs," he says, adding that the process of authenticating a work of art involves looking at many aspects, from its provenance to the pigments used. "Just like spectroscopy and dating techniques, AI can be one important tool in the main toolbox," he says.

The art historian Bendor Grosvenor says that AI can be useful for connoisseurs. "But the main drawback at the moment is the quality of the inputs given to the AI attribution programs currently being used. It is simply not possible to determine whether a painting is by Rubens by relying only on poor-quality images of not much more than half his oeuvre... No human connoisseur would be trusted to do so; neither can a computer."

His conclusion? "Must do better" is the report card on AI in this field so far.

Read the original here:

AI meets Old Masters in the fight to authenticate paintings - Financial Times

The Humane AI Pin worked better than I expected until it didn’t – The Verge

Look, I'm a Humane AI Pin doubter as much as the next person. And I still think the wearable, AI-powered assistant suffers from a case of this-thing-could-have-been-an-app. But I finally got to spend a little face-to-face time with the pin this morning, and you know what? It's a darn cool gadget. It's just buried under a layer of marketing so thick that it's hard to appreciate what it actually could be if Humane wasn't so self-serious.

If you spend time on Tech Threads or the like, you probably already know what the pin does: you clip it to your shirt, talk to it, and it uses generative AI to answer. It's a standalone device with its own SIM card, and there's no screen, just vibes. That, and a little laser that projects menus and text onto your palm so you can interact with mortal trifles like Wi-Fi settings and media playback controls.

The idea, reiterated as I watched a couple of Humane employees run through various demos, was that it's meant to help keep you connected while unplugging a little bit: less staring at screens and more living in the moment. AI helps fetch relevant bits from your calendar and email, and answers your questions when you're curious about the world around you.

It's all very lovely, but let's be real: this thing isn't a philosophy, it's a gadget. Gadgets are fun, helpful, and frustrating, and all of the above seems to apply to the Humane pin.

The AI Pin was genuinely impressive at times. There's a vision feature that will use the camera to scan the scene in front of you when prompted, analyze what's there, and describe it out loud. I stood in front of a Humane spokesperson as he tried out this feature, and frankly, the pin nailed it. It described Mobile World Congress as "an indoor event or exhibition with people walking around." Easy enough.

But it also pointed out the name Qualcomm on the signage behind me, and obviously reading the badge around my neck, identified me as "a person wearing a lanyard from the The Verge." One too many "the"s, but pretty impressive when you consider I wasn't standing all that close to the pin and the lighting was dim.

The gesture navigation was also impressive: more fluid and responsive than I thought it would be. I wasn't allowed to put the pin on myself, and it's hard to get into the right spot to project the laser onto your own hand since it's really a single-user device. I tried. But a couple of Humane employees demoing the product, who obviously had lots of practice with it, navigated the projected menus quickly and easily just by tilting their hands and tapping two fingers together.

But the pin isn't immune to the thing that gadgets often do: frustrate the hell out of you. Most of the AI is off-device, so there's a solid few seconds of waiting for responses to your requests and questions, not helped by the convention center's spotty connectivity. It also shut down on one occasion after briefly flashing a notice that it had overheated and needed to cool off. The employee demoing the pin for me said that this doesn't happen very often, and that the continued use of the laser for demonstration purposes probably did it. I believe that, but still, this is a device meant to sit next to your chest and go with you into lots of different environments, presumably including warm ones. Not great!

The laser projection is clearer than I imagined it would be, but it's still essentially light projected onto the palm of your hand. Hands aren't uniformly flat, and they're hard to keep perfectly still. Text kind of dances around in front of you, and while it's not difficult to read, it is harder than reading, say, text on a smartphone.

It's also impossible to get a sense of what it's like living with the thing in a convention center hall. Could a cotton shirt support its weight? How easy is the laser to see outside in direct sun? Would people understand why the trust light is illuminated? Does the Pin occasionally make things up, the way some AI tends to? I have a lot more questions than answers, but I guess at least I have more than zero answers now that I've seen it with my own eyes.

My early impression of the Pin is that there's something there, but it's not the thing. And the trouble is, all of Humane's marketing has built it up to be the thing. It was first introduced at a TED talk, for Pete's sake: that's like ground zero for people who take themselves too seriously. Humane's Sai Kambampati told me that the AI Pin isn't intended as a smartphone replacement. But it has its own data connection, its own monthly subscription fee, and its own smartphone-esque price of $699. And it's... not supposed to replace your phone?

Whatever's ahead of us in mobile computing, I have a feeling it's not exactly the AI Pin as I saw it demonstrated today. There's a lot more testing I want to do when the pin officially arrives in April. In the meantime, I didn't see the future exactly, but I did see a darn cool gadget; just don't take it too seriously.

Photography by Allison Johnson / The Verge

Read the original here:

The Humane AI Pin worked better than I expected until it didn't - The Verge

Is ChatGPT making scientists hyper-productive? The highs and lows of using AI – Nature.com

In a 2023 Nature survey of scientists, 30% of respondents had used generative AI tools to help write manuscripts.Credit: Nicolas Maeterlinck/Belga MAG/AFP via Getty

ChatGPT continues to steal the spotlight, more than a year after its public debut.

The artificial intelligence (AI) chatbot was released as a free-to-use tool in November 2022 by tech company OpenAI in San Francisco, California. Two months later, ChatGPT had already been listed as an author on a handful of research papers.

Academic publishers scrambled to announce policies on the use of ChatGPT and other large language models (LLMs) in the writing process. By last October, 87 of 100 top scientific journals had provided guidance to authors on generative AI, which can create text, images and other content, researchers reported on 31 January in The BMJ.

But that's not the only way in which ChatGPT and other LLMs have begun to change scientific writing. "In academia's competitive environment, any tool that allows researchers to produce more publications is going to be a very attractive proposition," says digital-innovation researcher Savvas Papagiannidis at Newcastle University in Newcastle upon Tyne, UK.

Generative AI is continuing to improve, so publishers, grant-funding agencies and scientists must consider what constitutes ethical use of LLMs, and what over-reliance on these tools says about a research landscape that encourages hyper-productivity.

Before its public release, ChatGPT was not nearly as user-friendly as it is today, says computer scientist Debora Weber-Wulff at the HTW Berlin University of Applied Sciences. "The interfaces for the older GPT models were something that only a computer scientist could love."

In the past, researchers typically needed specialized expertise to use advanced LLMs. "Now, GPT has democratized that to some degree," says Papagiannidis.

This democratization has catalysed the use of LLMs in research writing. In a 2023 Nature survey of more than 1,600 scientists, almost 30% said that they had used generative AI tools to help write manuscripts, and about 15% said they had used them to help write grant applications.

And LLMs have many other uses. They can help scientists to write code, brainstorm research ideas and conduct literature reviews. LLMs from other developers are improving as well, such as Google's Gemini and Claude 2 by Anthropic, an AI company in San Francisco. Researchers with the right skills can even develop their own personalized LLMs that are fine-tuned to their writing style and scientific field, says Thomas Lancaster, a computer scientist at Imperial College London.

About 55% of the respondents to the Nature survey felt that a major benefit of generative AI is its ability to edit and translate writing for researchers whose first language is not English. Similarly, in a poll by the European Research Council (ERC), which funds research in Europe, 75% of more than 1,000 ERC-grant recipients felt that generative AI will reduce language barriers in research by 2030, according to a report released in December.

Of the ERC survey respondents, 85% thought that generative AI could take on repetitive or labour-intensive tasks, such as literature reviews. And 38% felt that generative AI will promote productivity in science, such as by helping researchers to write papers at a faster pace.

Although ChatGPT's output can be convincingly human-like, Weber-Wulff warns that LLMs can still make language mistakes that readers might notice. That's one of the reasons she advocates for researchers to acknowledge LLM use in their papers. Chatbots are also notorious for generating fabricated information, called "hallucinations".

And there is a drawback to the productivity boost that LLMs might bring. Speeding up the paper-writing process could increase throughput at journals, potentially stretching editors and peer reviewers even thinner than they already are. "With this ever-increasing number of papers, because the numbers are going up every year, there just aren't enough people available to continue to do free peer review for publishers," Lancaster says. He points out that alongside researchers who openly use LLMs and acknowledge it, some quietly use the tools to churn out low-value research.

"It's already difficult to sift through the sea of published papers to find meaningful research," Papagiannidis says. If ChatGPT and other LLMs increase output, this will prove even more challenging.

"We have to go back and look at what the reward system is in academia," Weber-Wulff says. The current publish-or-perish model rewards researchers for constantly pushing out papers. But many people argue that this needs to shift towards a system that prioritizes quality over quantity. For example, Weber-Wulff says, the German Research Foundation allows grant applicants to include only ten publications in a proposal. "You want to focus your work on getting really good, high-level papers," she says.

According to the study in The BMJ, 24 of the 100 largest publishers, collectively responsible for more than 28,000 journals, had provided guidance on generative AI by last October. Journals with generative-AI policies tend to allow some use of ChatGPT and other LLMs, as long as they're properly acknowledged.

Springer Nature, for example, states that LLM use should be documented in the methods or another section of the manuscript, a guideline introduced in January 2023. Generative AI tools do not, however, satisfy the criteria for authorship, because authorship carries with it accountability for the work, and AI tools cannot take such responsibility. (Nature's news team is editorially independent of its publisher, Springer Nature.)

Enforcing these rules is easier said than done, because undisclosed AI-generated text can be difficult for publishers and peer reviewers to spot. Some sleuths have caught it through subtle phrases and mistranslations. Unlike cases of plagiarism, in which there is clear source material, "you can't prove that anything was written by AI," Weber-Wulff says. Despite researchers racing to create LLM-detection tools, "we haven't seen one that we thought produced a compelling enough result to screen journal submissions," says Holden Thorp, editor-in-chief of the Science family of journals.

Although as of November the American Association for the Advancement of Science, which publishes Science, allows some disclosed use of generative AI in the preparation of manuscripts, it still bans the use of LLMs during peer review, Thorp says. This is because he and others at Science want reviewers to devote their full attention to the manuscript being assessed, he adds. Similarly, Springer Nature's policy prohibits peer reviewers from uploading manuscripts into generative-AI tools.

Scientific sleuths spot dishonest ChatGPT use in papers

Some grant-funding agencies, including the US National Institutes of Health and the Australian Research Council, forbid reviewers from using generative AI to help examine grant applications because of concerns about confidentiality (grant proposals are treated as confidential documents, and the data entered into public LLMs could be accessed by other people). But the ERC Scientific Council, which governs the ERC, released a statement in December recognizing that researchers use AI technologies, along with other forms of external help, to prepare grant proposals. It said that, in these cases, authors must still take full responsibility for their work.

"Many organizations come out now with very defensive statements requiring authors to acknowledge all use of generative AI," says ERC Scientific Council member Tom Henzinger, a computer scientist at the Institute of Science and Technology Austria in Klosterneuburg.

To him, ChatGPT seems no different from running text by a colleague for feedback. "Use every resource at your disposal," Henzinger says.

Regardless of the ever-changing rules around generative AI, researchers will continue to use it, Lancaster says. "There is no way of policing the use of technology like ChatGPT."

Original post:

Is ChatGPT making scientists hyper-productive? The highs and lows of using AI - Nature.com

AI could make the four-day workweek inevitable – BBC.com

By Elizabeth BennettFeatures correspondent

As artificial intelligence gains traction in office operations, some companies are giving employees a day to step back.

Working four days while getting paid for five is a dream for many employees. Yet the dramatic shifts in the pandemic-era workplace have turned this once unfathomable idea into a reality for some workers. And as more global data emerges, an increasing number of companies are courting the approach after positive trial-run results across countries including the UK, Iceland, Portugal and more.

Now, as pilots continue (in Germany, for instance, a trial of 45 companies has just begun), another factor has entered the mix. Artificial intelligence (AI) is gathering pace in the workplace, and some experts believe it could accelerate the adoption of the four-day workweek.

Data from London-based news-and-events resource Tech.co, collected in late 2023, lends credence to this idea. For its 2024 Impact of Technology on the Workplace report, the company surveyed more than 1,000 US business leaders. The researchers found that 29% of organisations with four-day workweeks use AI extensively in their operations, implementing generative AI tools such as ChatGPT as well as other programmes to streamline work. In comparison, only 8% of organisations with five-day workweeks use AI to this extent. And 93% of businesses using AI are open to a four-day workweek, whereas among businesses that don't use AI, fewer than half are open to working shorter weeks.

At London-based digital design agency Driftime, adopting AI technology has been crucial to enable the business to operate a flexible four-day work week. "By handing over simple tasks to AI tools, we gain invaluable time previously lost to slow aspects of the process," says co-founder Abb-d Taiyo. "With tools like Modyfi, the graphics are all live and modifiable, making it so much easier and quicker for our designers to create concepts and ideas."

Taiyo believes it makes sense for both his employees and his bottom line to work the condensed week. "Instead of a dip in the quantity of work created over just four days, we've seen a remarkably high quality of work matched by a high staff satisfaction return. The health and happiness of our team is in direct correlation to the high standard of work produced," he says.

Shayne Simpson, group managing director of UK-based TechNET IT Recruitment, also believes AI has been fundamental to the success of the company's four-day work week policy. The firm has found AI tools save each of their recruitment consultants 21 hours per week, primarily by automating previously manual tasks like data input, confirmation emails, resume screening and candidate outreach. This has reduced the time to fill permanent roles at the company by an average of 10 days. "This timesaving allows our team to achieve their weekly goals earlier in the week and the flexibility liberates our consultants from being tethered to their desks, enabling them to enjoy a well-deserved Friday off," says Simpson.

Not only has the company's abridged workweek boosted productivity and morale, Simpson says it's also been key to attracting talent to work within the company itself. "Seasoned recruitment professionals are enticed by our streamlined processes while entry-level talent is eager to embrace new tools." It's lifted the entire business, he adds.

While AI tools are certainly paving the way for a four-day work week within some industries, the technology can't usher in the change alone. Organisational culture within a business is also fundamental, says Na Fu, a professor in human resource management at Trinity Business School, Ireland. "An openness to innovative work structures, an experimental mindset and, importantly, a culture grounded in high levels of trust are all important for the four-day work week to be successfully adopted," she says.

As the digital transformation with AI progresses, employees themselves also must be willing to level up, she adds: "Rather than becoming mere caretakers or servants of machines, human workers need to develop new skills that can leverage, complement and lead AI, achieving the enhanced outcomes."

Some industries will benefit from AI more than others, however, notably those able to use generative AI tools for tasks such as software development, content creation, marketing and legal services, says Fu. Plus, artificial intelligence development still has a way to go if it is to substantially reduce human working hours across the board.

What drives the shift to a four-day workweek in an AI-powered business landscape may not ultimately be up to the robots, however. Executive buy-in is required, and whether leaders will embrace the unconventional concept will vary depending on a firm's overarching purpose and values, says Fu. Instead of letting AI supplement the work of humans, for instance, some businesses could use it to automate certain tasks while piling other work on employees to fill the newly open hours.

Still, despite some reservations, an increasing number of business leaders, including those from some of the world's highest-earning companies, see a technology-driven shortened workweek as an inevitable future. In October 2023, JPMorgan Chase & Co CEO Jamie Dimon told Bloomberg TV: "Your children are going to live to 100, and they'll probably be working three-and-a-half days a week." Employees will have to wait and see.

Here is the original post:

AI could make the four-day workweek inevitable - BBC.com

Google CEO: Gemini AI photo diversity scandal offended our users – The Verge

Hi everyone,

I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias. To be clear, that's completely unacceptable and we got it wrong.

Our teams have been working around the clock to address these issues. We're already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes. And we'll review what happened and make sure we fix it at scale.

Our mission to organize the world's information and make it universally accessible and useful is sacrosanct. We've always sought to give users helpful, accurate, and unbiased information in our products. That's why people trust them. This has to be our approach for all our products, including our emerging AI products.

We'll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we've made in AI over the last several weeks. That includes some foundational advances in our underlying models (e.g., our 1 million long-context window breakthrough and our open models), both of which have been well received.

We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let's focus on what matters most: building helpful products that are deserving of our users' trust.

See the rest here:

Google CEO: Gemini AI photo diversity scandal offended our users - The Verge

New AI image generator is 8 times faster than OpenAI’s best tool and can run on cheap computers – Livescience.com

A new artificial intelligence (AI) tool can generate images in under two seconds and it doesn't need expensive hardware to run.

South Korean scientists have used a special technique called knowledge distillation to compress an open-source (or publicly available) image generation model known as Stable Diffusion XL, which has 2.56 billion parameters, or variables the AI uses to learn during training.

The smallest version of the new model, known as "KOALA", has just 700 million parameters, meaning it's lean enough to run quickly and without needing expensive and energy-intensive hardware.

Related: AI chatbots need to be much better at remembering things. Have scientists just cracked their terrible memory problem?

The method they used, knowledge distillation, transfers knowledge from a large model to a smaller one, ideally without compromising performance. The benefit of a smaller model is that it takes less time to perform computations and generate an answer.
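The core mechanism of knowledge distillation can be sketched in a few lines. The example below is a minimal, illustrative Python sketch, not the researchers' actual training code: a small "student" model is trained to match the temperature-softened output distribution of a large "teacher", with the divergence between the two distributions serving as the loss to minimize. All names and values are hypothetical.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature.

    Higher temperatures flatten the distribution, exposing more of the
    teacher's "dark knowledge" about relative class similarities.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimizing this during training pushes the small student to mimic
    the large teacher's behaviour.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose outputs match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss that training reduces.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, [2.0, 1.0, 0.1]))  # 0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # positive
```

In practice the student is a neural network trained by gradient descent on this loss (often combined with an ordinary task loss), but the principle is the same: the small model learns to reproduce the big model's outputs rather than learning from scratch.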

The tool can run on low-cost graphics processing units (GPUs) and needs roughly 8GB of RAM to process requests, whereas larger models need high-end industrial GPUs.

The team published their findings in a paper on Dec. 7, 2023 on the preprint database arXiv. They have also made their work available via the open-source AI repository Hugging Face.

The Electronics and Telecommunications Research Institute (ETRI), the institution behind the new models, has created five versions: three versions of the "KOALA" image generator, which generates images based on text input, and two versions of "Ko-LLaVA", which can answer text-based questions with images or video.

When they tested KOALA, it generated an image based on the prompt "a picture of an astronaut reading a book under the moon on Mars" in 1.6 seconds. OpenAI's DALL·E 2 generated an image from the same prompt in 12.3 seconds, and DALL·E 3 generated it in 13.7 seconds, according to a statement.
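The headline's roughly eight-fold speedup checks out against the reported timings, as a quick back-of-the-envelope calculation (using only the numbers from the statement) shows:

```python
# Per-image generation times reported in the statement (seconds).
koala = 1.6
dalle2 = 12.3
dalle3 = 13.7

# KOALA's speedup relative to each DALL·E version.
print(round(dalle2 / koala, 1))  # 7.7
print(round(dalle3 / koala, 1))  # 8.6
```

Averaging the two ratios lands close to the "8 times faster" figure in the headline.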

The scientists now plan to integrate the technology they've developed into existing image generation services, education services, content production and other lines of business.

Read more:

New AI image generator is 8 times faster than OpenAI's best tool and can run on cheap computers - Livescience.com