
Crypto News Today: BTC-Spot ETF Activity and Binance Grab the Headlines – FX Empire

BlackRock (BLK) submitted an S-1 amendment for the iShares Bitcoin Trust. Significantly, BlackRock excluded in-kind creations and redemptions. BlackRock and Fidelity were among the issuers meeting with the SEC to discuss in-kind creations. Last week, Invesco filed an updated S-1, committing to cash creations.

With the January 5-10 approval window approaching, other issuers also filed updated S-1 forms.

Bloomberg Intelligence ETF analyst James Seyffart shared the updates. WisdomTree filed an updated S-1 for the WisdomTree Bitcoin Fund. Ark Invest and 21Shares also filed an updated S-1 for the ARK 21Shares Bitcoin ETF, switching to cash creations.

There was even a new ETF filing for a 7RCC Spot Bitcoin & Carbon Credit Futures ETF. The Fund will hold 80% BTC and 20% carbon credit futures. Considering the continued focus on Bitcoin mining and the environment, more ESG (Environment, Social, and Governance) filings are likely.

On Monday, the CFTC announced that the court had formalized the agency's settlement with Binance US, Binance, and CZ. Former Binance CEO CZ must pay $150 million, with Binance paying $2.7 billion. Binance will pay $1.35 billion to the CFTC, with a further $1.35 billion as a penalty for ill-gotten transaction fees.

Significantly, Binance's former Chief Compliance Officer Samuel Lim must pay a $1.5 million penalty for aiding and abetting Binance's violations and engaging in activities outside of the US to willfully evade or attempt to evade US law.

In November, Binance agreed to a $4.3 billion settlement with the US Department of Justice.

The SEC is next in line after filing charges against Binance US, Binance, and CZ in the summer. Significantly, the SEC plans to pursue legal proceedings despite the $4.3 billion payout to the DoJ.

BTC sat above the 50-day and 200-day EMAs, sending bullish price signals.

A BTC break above the $42,900 resistance level would give the bulls a run at the 2023 high of $44,747. A move through the December 8 high would bring the $46,400 resistance level into view.

Focal points include BTC-spot ETF-related news and SEC activity.

However, a drop below the $42,000 handle would give the bears a run at the $41,585 support level and sub-$41,000.

The 14-period Daily RSI reading of 59.47 suggests BTC has room to break above the $42,900 resistance level before entering overbought territory.

ETH held above the 50-day and 200-day EMAs, sending bullish price signals.

An ETH move to the $2,250 handle would bring the $2,300 resistance level into play.

However, a fall through the $2,150 handle would support a break below the $2,143 support level. A drop below the $2,143 support level would bring the 50-day EMA into play.

The 14-period Daily RSI reading of 53.96 indicates ETH has room to move to the $2,300 resistance level before entering overbought territory.
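For readers who want to sanity-check readings like these, below is a minimal sketch of the standard 14-period RSI calculation (Wilder's smoothing), written in Python. The closing-price series is a placeholder, not market data; by convention, readings above 70 are treated as overbought and below 30 as oversold.

```python
import numpy as np

def rsi(closes, period=14):
    """Relative Strength Index: 100 - 100 / (1 + RS), where RS is the
    ratio of Wilder-smoothed average gains to average losses."""
    deltas = np.diff(np.asarray(closes, dtype=float))
    gains = np.where(deltas > 0, deltas, 0.0)
    losses = np.where(deltas < 0, -deltas, 0.0)
    # Seed with simple averages over the first window, then smooth.
    avg_gain, avg_loss = gains[:period].mean(), losses[:period].mean()
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: maximally overbought
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

# Placeholder daily closes, not real BTC prices.
closes = [42000 + 120 * i + (-1) ** i * 300 for i in range(30)]
print(round(rsi(closes), 2))
```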


UniSat: Revolutionizing Token Deployment on the Binance Smart Chain – Medium

In the rapidly evolving sphere of blockchain and cryptocurrency, the quest for streamlined solutions for token and NFT deployment has never been more pronounced. UniSat (https://unisat.vercel.app/) has emerged as a standout platform, offering an all-encompassing solution for deploying BRC20 NFTs and tokens. This article delves into the distinctive features that make UniSat a compelling choice, including a new feature allowing new users to claim up to $100 worth of BRC20 tokens during their onboarding process.

UniSat Overview:

UniSat positions itself as an integrated platform designed to simplify the complexities associated with creating and deploying BRC20 tokens and NFTs on the Binance Smart Chain (BSC). With a strong focus on user convenience, UniSat provides a straightforward experience for both newcomers and seasoned developers.

Key Features:

User-Friendly Interface: UniSat's intuitive design ensures users, regardless of their experience levels, can navigate the deployment process seamlessly. The platform prioritizes simplicity without compromising on functionality.

Token and NFT Deployment: UniSat excels in facilitating the deployment of BRC20 tokens and NFTs. Users can craft custom tokens and NFTs, tailoring them for various purposes, be it digital art or applications in decentralized finance (DeFi).

Smart Contract Templates: Simplifying the deployment process, UniSat offers a range of pre-designed smart contract templates. This feature eliminates the need for users to delve into intricate coding, ensuring the security and reliability of smart contracts.

Gas Fee Optimization: Leveraging the Binance Smart Chain's low transaction fees and swift confirmation times, UniSat enables users to deploy tokens and NFTs without the hefty gas fees associated with other blockchain networks.

Customization Options: UniSat empowers users with customization options, allowing them to tailor tokens and NFTs to specific project requirements. From supply limits to token names and attributes, UniSat provides flexibility without compromising security.

Integrated Marketplace: UniSat goes beyond deployment by featuring an integrated marketplace. This platform allows users to showcase and trade their tokens and NFTs, enhancing project visibility and providing liquidity for token holders.

Claim up to $100 for New Users: As a unique incentive, UniSat introduces a feature allowing new users to claim up to $100 worth of BRC20 tokens during their onboarding process. This promotion aims to encourage exploration and engagement with the platform, providing users with an initial boost to kickstart their projects.

Community Support and Documentation: UniSat places a premium on user support, maintaining an active community and offering comprehensive documentation. Users can access guides, tutorials, and engage with the community for issue resolution and advice.

Conclusion:

UniSat stands as a robust solution for deploying BRC20 tokens and NFTs on the Binance Smart Chain. With its user-friendly interface, smart contract templates, integrated marketplace, and the added incentive for new users to claim up to $100 worth of BRC20 tokens, UniSat positions itself as a leading choice in the competitive landscape of blockchain development. Its commitment to simplicity and functionality offers a potent tool for navigating the complexities of token deployment and fostering a vibrant community of users.

Website: https://unisat.vercel.app/


The 3 Most Important AI Innovations of 2023 – TIME

In many ways, 2023 was the year that people began to understand what AI really is, and what it can do. It was the year that chatbots first went truly viral, and the year that governments began taking AI risk seriously. Those developments weren't so much new innovations as they were technologies and ideas taking center stage after a long gestation period.

But there were plenty of new innovations, too. Here are three of the biggest from the past year:

Multimodality might sound like jargon, but it's worth understanding what it means: it's the ability of an AI system to process many different types of data, not just text but also images, video, audio and more.

This year was the first time that the public gained access to powerful multimodal AI models. OpenAI's GPT-4 was the first of these, allowing users to upload images as well as text inputs. GPT-4 can see the contents of an image, which opens up all kinds of possibilities, such as asking it what to make for dinner based on a photograph of the contents of your fridge. In September, OpenAI rolled out the ability for users to interact with ChatGPT by voice as well as text.
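For a concrete sense of what "images as well as text inputs" looks like in practice, here is a hedged sketch using the openai Python package. The model name, image URL and prompt are placeholder assumptions, not details from the article.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The fridge-photo example from the paragraph above: one request that
# combines text with an image. Model name and URL are placeholders.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What could I make for dinner from these ingredients?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/fridge.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```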

Google DeepMind's latest model, Gemini, announced in December, can also work with images and audio. A launch video shared by Google showed the model identifying a duck based on a line drawing on a Post-it note. In the same video, after being shown an image of pink and blue yarn and asked what it could be used to create, Gemini generated an image of a pink and blue octopus plushie. (The marketing video appeared to show Gemini observing moving images and responding to audio commands in real time, but in a post on its website, Google said the video had been edited for brevity, and that the model was being prompted using still images, not video, and text prompts, not audio, although the model does have audio capabilities.)

"I think the next landmark that people will think back to, and remember, is [AI systems] going much more fully multimodal," Google DeepMind co-founder Shane Legg said on a podcast in October. "It's early days in this transition, and when you start really digesting a lot of video and other things like that, these systems will start having a much more grounded understanding of the world." In an interview with TIME in November, OpenAI CEO Sam Altman said multimodality in the company's new models would be one of the key things to watch out for next year.


The promise of multimodality isn't just that models become more useful. It's also that the models can be trained on abundant new sets of data (images, video, audio) that contain more information about the world than text alone. The belief inside many top AI companies is that this new training data will translate into these models becoming more capable or powerful. It is a step on the path, many AI scientists hope, toward artificial general intelligence: the kind of system that can match human intellect, making new scientific discoveries and performing economically valuable labor.

One of the biggest unanswered questions in AI is how to align it to human values. If these systems become smarter and more powerful than humans, they could cause untold harm to our species (some even say total extinction) unless, somehow, they are constrained by rules that put human flourishing at their center.

The process that OpenAI used to align ChatGPT (to avoid the racist and sexist behaviors of earlier models) worked well, but it required a large amount of human labor, through a technique known as reinforcement learning with human feedback, or RLHF. Human raters would assess the AI's responses and give it the computational equivalent of a doggy treat if the response was helpful, harmless, and compliant with OpenAI's list of content rules. By rewarding the AI when it was good and punishing it when it was bad, OpenAI developed an effective and relatively harmless chatbot.
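The "doggy treat" is, concretely, a scalar score from a reward model trained on human preference rankings. The PyTorch sketch below shows that training step under a Bradley-Terry-style objective; the random embeddings and dimensions are toy stand-ins for real model activations, not OpenAI's actual pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: maps a response representation to a scalar score.
# Real RLHF reward models are full language models with a scalar head.
reward_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(chosen, rejected):
    # The human-preferred response should outscore the rejected one.
    return -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()

# One update on a fake batch of 8 human preference pairs.
chosen, rejected = torch.randn(8, 64), torch.randn(8, 64)
loss = preference_loss(chosen, rejected)
opt.zero_grad(); loss.backward(); opt.step()
```

The chatbot itself is then fine-tuned, typically with a policy-gradient method such as PPO, to produce responses that this reward model scores highly.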

But since the RLHF process relies heavily on human labor, there's a big question mark over how scalable it is. It's expensive. It's subject to the biases or mistakes made by individual raters. It becomes more failure-prone the more complicated the list of rules is. And it looks unlikely to work for AI systems that are so powerful they begin doing things humans can't comprehend.

Constitutional AI, first described by researchers at top AI lab Anthropic in a December 2022 paper, tries to address these problems, harnessing the fact that AI systems are now capable enough to understand natural language. The idea is quite simple. First, you write a constitution that lays out the values you'd like your AI to follow. Then you train the AI to score responses based on how aligned they are to the constitution, and then incentivize the model to output responses that score more highly. Instead of reinforcement learning from human feedback, it's reinforcement learning from AI feedback. "These methods make it possible to control AI behavior more precisely and with far fewer human labels," the Anthropic researchers wrote. Constitutional AI was used to align Claude, Anthropic's 2023 answer to ChatGPT. (Investors in Anthropic include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.)
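In outline, the feedback step swaps the human rater for a judge model that scores candidate responses against the written principles. The sketch below is a toy rendering of that idea, not Anthropic's published pipeline; judge_score is a stub where a real system would query an LLM.

```python
CONSTITUTION = [
    "Choose the response that is most helpful and honest.",
    "Choose the response least likely to encourage harm.",
]

def judge_score(principle: str, prompt: str, response: str) -> float:
    """Stub: a real system would ask a judge model to rate how well the
    response follows the principle; here we return a placeholder score."""
    return float(len(response) > 0)

def ai_feedback(prompt: str, candidates: list[str]) -> tuple[str, str]:
    """Return a (chosen, rejected) preference pair labeled by AI rather
    than humans; such pairs then train a reward model exactly as in RLHF."""
    ranked = sorted(
        candidates,
        key=lambda r: sum(judge_score(p, prompt, r) for p in CONSTITUTION),
        reverse=True,
    )
    return ranked[0], ranked[-1]

chosen, rejected = ai_feedback("How do I pick a lock?", ["response A", "response B"])
```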

"With constitutional AI, you're explicitly writing down the normative premises with which your model should approach the world," Jack Clark, Anthropic's head of policy, told TIME in August. "Then the model is training on that." There are still problems, like the difficulty of making sure the AI has understood both the letter and the spirit of the rules ("you're stacking your chips on a big, opaque AI model," Clark says), but the technique is a promising addition to a field where new alignment strategies are few and far between.

Of course, Constitutional AI doesn't answer the question of whose values AI should be aligned to. But Anthropic is experimenting with democratizing that question. In October, the lab ran an experiment that asked a representative group of 1,000 Americans to help pick rules for a chatbot, and found that while there was some polarization, it was still possible to draft a workable constitution based on statements that the group came to a consensus on. Experiments like this could open the door to a future where ordinary people have much more of a say over how AI is governed, compared to today, when a small number of Silicon Valley executives write the rules.

One noticeable outcome of the billions of dollars pouring into AI this year has been the rapid rise of text-to-video tools. Last year, text-to-image tools had barely emerged from their infancy; now, there are several companies offering the ability to turn sentences into moving images with increasingly fine-grained levels of accuracy.

One of those companies is Runway, a Brooklyn-based AI video startup that wants to make filmmaking accessible to anybody. Its latest model, Gen-2, allows users to not just generate a video from text, but also change the style of an existing video based on a text prompt (for example, turning a shot of cereal boxes on a tabletop into a nighttime cityscape), in a process it calls video-to-video.

"Our mission is to build tools for human creativity," Runway's CEO Cristobal Valenzuela told TIME in May. He acknowledges that this will have an impact on jobs in the creative industries, where AI tools are quickly making some forms of technical expertise obsolete, but he believes the world on the other side is worth the upheaval. "Our vision is a world where human creativity gets amplified and enhanced, and it's less about the craft, and the budget, and the technical specifications and knowledge that you have, and more about your ideas." (Investors in Runway include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.)

Another startup in the text-to-video space is Pika AI, which is reportedly being used to create millions of new videos each week. Run by two Stanford dropouts, the company launched in April but has already secured funding that values it at between $200 million and $300 million, according to Forbes. Pitched not at professional filmmakers but at the general user, free tools like Pika are trying to transform the user-generated content landscape. That could happen as soon as 2024, but text-to-video tools are computationally expensive, so don't be surprised if they start charging for access once the venture capital runs out.


Will superintelligent AI sneak up on us? New study offers reassurance – Nature.com

Some researchers think that AI could eventually achieve general intelligence, matching and even exceeding humans on most tasks. Credit: Charles Taylor/Alamy

Will an artificial intelligence (AI) superintelligence appear suddenly, or will scientists see it coming, and have a chance to warn the world? That's a question that has received a lot of attention recently, with the rise of large language models, such as ChatGPT, which have achieved vast new abilities as their size has grown. Some findings point to "emergence", a phenomenon in which AI models gain intelligence in a sharp and unpredictable way. But a recent study calls these cases "mirages": artefacts arising from how the systems are tested, and suggests that innovative abilities instead build more gradually.

"I think they did a good job of saying nothing magical has happened," says Deborah Raji, a computer scientist at the Mozilla Foundation who studies the auditing of artificial intelligence. "It's a really good, solid, measurement-based critique."

The work was presented last week at the NeurIPS machine-learning conference in New Orleans.

Large language models are typically trained using huge amounts of text, or other information, which they use to generate realistic answers by predicting what comes next. Even without explicit training, they manage to translate language, solve mathematical problems and write poetry or computer code. The bigger the model (some have more than a hundred billion tunable parameters), the better it performs. Some researchers suspect that these tools will eventually achieve artificial general intelligence (AGI), matching and even exceeding humans on most tasks.


The new research tested claims of emergence in several ways. In one approach, the scientists compared the abilities of four sizes of OpenAI's GPT-3 model to add up four-digit numbers. Looking at absolute accuracy, performance jumped between the third and fourth model sizes from nearly 0% to nearly 100%. But the trend is less extreme if the number of correctly predicted digits in the answer is considered instead. The researchers also found that they could dampen the curve by giving the models many more test questions; in this case, the smaller models answer correctly some of the time.
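A toy simulation makes the point concrete: if per-digit skill improves smoothly with scale, an all-or-nothing exact-match metric still shows a sharp, emergent-looking jump, while the per-digit metric rises gradually. This illustrates the argument only; it is not the study's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(p_digit, n_problems=10_000, digits=4):
    """Each digit of a 4-digit answer is correct independently with
    probability p_digit, a stand-in for smoothly improving skill."""
    correct = rng.random((n_problems, digits)) < p_digit
    exact = correct.all(axis=1).mean()  # discrete: whole answer right
    partial = correct.mean()            # continuous: fraction of digits right
    return exact, partial

for p in [0.3, 0.5, 0.7, 0.9, 0.97]:    # proxy for increasing model scale
    exact, partial = simulate(p)
    print(f"per-digit skill {p:.2f}: exact-match {exact:.3f}, per-digit {partial:.3f}")
```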

Next, the researchers looked at the performance of Google's LaMDA language model on several tasks. The ones for which it showed a sudden jump in apparent intelligence, such as detecting irony or translating proverbs, were often multiple-choice tasks, with answers scored discretely as right or wrong. When, instead, the researchers examined the probabilities that the models placed on each answer (a continuous metric), signs of emergence disappeared.

Finally, the researchers turned to computer vision, a field in which there are fewer claims of emergence. They trained models to compress and then reconstruct images. By merely setting a strict threshold for correctness, they could induce apparent emergence. "They were creative in the way that they designed their investigation," says Yejin Choi, a computer scientist at the University of Washington in Seattle who studies AI and common sense.

Study co-author Sanmi Koyejo, a computer scientist at Stanford University in Palo Alto, California, says that it wasn't unreasonable for people to accept the idea of emergence, given that some systems exhibit abrupt phase changes. He also notes that the study can't completely rule it out in large language models, let alone in future systems, but adds that "scientific study to date strongly suggests most aspects of language models are indeed predictable".

Raji is happy to see the community pay more attention to benchmarking, rather than to developing neural-network architectures. She'd like researchers to go even further and ask how well the tasks relate to real-world deployment. For example, does acing the LSAT exam for aspiring lawyers, as GPT-4 has done, mean that a model can act as a paralegal?

The work also has implications for AI safety and policy. "The AGI crowd has been leveraging the emerging-capabilities claim," Raji says. Unwarranted fear could lead to stifling regulations or divert attention from more pressing risks. "The models are making improvements, and those improvements are useful," she says. "But they're not approaching consciousness yet."


How AI works is often a mystery: that's a problem – Nature.com


Many AIs are 'black box' in nature, meaning that part or all of the underlying structure is obfuscated, either intentionally to protect proprietary information, due to the sheer complexity of the model, or both. This can be problematic in situations where people are harmed by decisions made by AI but left without recourse to challenge them.

Many researchers in search of solutions have coalesced around a concept called Explainable AI, but this too has its issues; notably, there is no real consensus on what it is or how it should be achieved. So how do we deal with these black boxes? In this podcast, we try to find out.


The year of social media soul-searching: Twitter dies, X and Threads are born and AI gets personal – The Associated Press

We lost Twitter and got X. We tried out Bluesky and Mastodon (well, some of us did). We fretted about AI bots and teen mental health. We cocooned in private chats and scrolled endlessly as we did in years past. For social media users, 2023 was a year of beginnings and endings, with some soul-searching in between.

Here's a look back at some of the biggest stories in social media in 2023 and what to watch for next year:

A little more than a year ago, Elon Musk walked into Twitter's San Francisco headquarters, fired its CEO and other top executives and began transforming the social media platform into what's now known as X.

Musk revealed the X logo in July. It quickly replaced Twitter's name and its whimsical blue bird icon, online and on the company's San Francisco headquarters.

"And soon we shall bid adieu to the twitter brand and, gradually, all the birds," Musk posted on the site.

Because of its public nature, and because it attracted public figures, journalists and other high-profile users, Twitter always had an outsized influence on popular culture, but that influence seems to be waning.

"It had a lot of problems even before Musk took it over, but it was a beloved brand with a clear role in the social media landscape," said Jasmine Enberg, a social media analyst at Insider Intelligence. "There are still moments of Twitter magic on the platform, like when journalists took to the platform to post real-time updates about the OpenAI drama, and the smaller communities on the platform remain important to many users. But the Twitter of the past 17 years is largely gone, and X's reason for existence is murky."

Since Musk's takeover, X has been bombarded by allegations of misinformation and racism, endured significant advertising losses and suffered declines in usage. It didn't help when Musk went on an expletive-ridden rant in an on-stage interview about companies that had halted spending on X. Musk asserted that advertisers that pulled out were engaging in blackmail and, using a profanity, essentially told them to get lost.

Continuing the trend of welcoming back users who had been banned by the former Twitter for hate speech or spreading misinformation, in December, Musk restored the X account of conspiracy theorist Alex Jones, pointing to an unscientific poll he posted to his followers that came out in favor of the Infowars host who repeatedly called the 2012 Sandy Hook school shooting a hoax.

LGBTQ and other organizations supporting marginalized groups, meanwhile, have been raising alarms about X becoming less safe. In April, for instance, it quietly removed a policy against the targeted misgendering or deadnaming of transgender individuals. In June, the advocacy group GLAAD called it the most dangerous platform for LGBTQ people.

GLSEN, an LGBTQ education group, announced in December that it was leaving X, joining other groups such as the suicide prevention nonprofit Trevor Project, saying that Musk's changes have "birthed a new platform that enables its users to harass and target the LGBTQ+ community without restriction or discipline."

Musk's ambitions for X include transforming the platform into an "everything app" like China's WeChat, for instance. The problem? It's not clear if U.S. and Western audiences are keen on the idea. And Musk himself has been pretty vague on the specifics.

While X contends with an identity crisis, some users began looking for a replacement. Mastodon was one contender, along with Bluesky, which actually grew out of Twitter as a pet project of former CEO Jack Dorsey, who still sits on its board of directors.

When tens of thousands of people, many of them fed-up Twitter users, began signing up for the (still) invite-only Bluesky in the spring, the app had fewer than 10 people working on it, CEO Jay Graber said recently.

"This meant scrambling to keep everything working, keeping people online, scrambling to add features that we had on the roadmap," she said. For weeks, the work was simply scaling: ensuring that the systems could handle the influx.

"We had one person on the app for a while, which was very funny, and there were memes about Paul versus all of Twitter's engineers," she recalled. "I don't think we hired a second app developer until after the crazy growth spurt."

Seeing an opportunity to lure in disgruntled Twitter users, Facebook parent Meta launched its own rival, Threads, in July. It soared to popularity as tens of millions began signing up, though keeping people on has been a bit of a challenge. Then, in December, Meta CEO Mark Zuckerberg announced in a surprise move that the company was testing interoperability: the idea, championed by Mastodon, Bluesky and other decentralized social networks, that people should be able to use their accounts on different platforms, kind of like your email address or phone number.

"Starting a test where posts from Threads accounts will be available on Mastodon and other services that use the ActivityPub protocol," Zuckerberg posted on Threads in December. "Making Threads interoperable will give people more choice over how they interact and it will help content reach more people. I'm pretty optimistic about this."
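The interoperability being tested rests on servers exchanging ActivityStreams JSON objects over the W3C's ActivityPub protocol. Below is a minimal example of the kind of object involved; the actor URL is hypothetical, and only the overall structure follows the spec.

```python
import json

# A "Create" activity wrapping a Note: the basic unit a federated server
# (Mastodon, or Threads in this test) delivers to followers' inboxes.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://threads.example/users/alice",  # hypothetical actor
    "object": {
        "type": "Note",
        "content": "Posted on one service, readable from another.",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}
print(json.dumps(activity, indent=2))
```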

Social media's impact on children's mental health hurtled toward a reckoning this year, with the U.S. surgeon general warning in May that there is not enough evidence to show that social media is safe for children and teens, and calling on tech companies, parents and caregivers to take immediate action to protect kids now.

"We're asking parents to manage a technology that's rapidly evolving that fundamentally changes how their kids think about themselves, how they build friendships, how they experience the world and technology, by the way, that prior generations never had to manage," Dr. Vivek Murthy told The Associated Press. "And we're putting all of that on the shoulders of parents, which is just simply not fair."

In October, dozens of U.S. states sued Meta for harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms.

In November, Arturo Bjar, a former engineering director at Meta, testified before a Senate subcommittee about social media and the teen mental health crisis, hoping to shed light on how Meta executives, including Zuckerberg, knew about the harms Instagram was causing but chose not to make meaningful changes to address them.

The testimony came amid a bipartisan push in Congress to adopt regulations aimed at protecting children online. In December, the Federal Trade Commission proposed sweeping changes to a decades-old law that regulates how online companies can track and advertise to children, including turning off targeted ads to kids under 13 by default and limiting push notifications.

Your AI friends have arrived, but chatbots are just the beginning. Standing in a courtyard at his company's Menlo Park, California, headquarters, Zuckerberg said this fall that Meta is focused on building the future of human connection, and painted a near-future where people interact with hologram versions of their friends or coworkers and with AI bots built to assist them. The company unveiled an army of AI bots, with celebrities such as Snoop Dogg and Paris Hilton lending their faces to play them, that social media users can interact with.

"Next year, AI will be integrated into virtually every corner of the platforms," Enberg said.

"Social apps will use AI to drive usage, ad performance and revenues, subscription sign-ups, and commerce activity. AI will deepen both users' and advertisers' reliance and relationship with social media, but its implementation won't be entirely smooth sailing as consumer and regulatory scrutiny will intensify," she added.

The analyst also sees subscriptions as an increasingly attractive revenue stream for some platforms. Inspired by Musk's X, subscriptions started as a way to diversify or boost revenues as social ad businesses took a hit, but they have persisted and expanded even as the social ad market has steadied itself.

With major elections coming up in the U.S. and India, among other countries, AI's and social media's role in misinformation will continue to be front and center for social media watchers.

"We're not prepared for this," A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox, told the AP in May. "To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it's going to have a major impact."


AI Says Painting Attributed to Raphael Includes Contributions from Other Artists – ARTnews

A masterpiece hanging in the Museo Nacional del Prado in Madrid has long sparked debates over whether it was the work of Raphael. But a group of researchers now claims to have finally solved the mystery through the use of an artificial intelligence algorithm.

The Madonna della Rosa (Madonna of the Rose) depicts Mary, Joseph, and the baby Jesus, along with an infant version of John the Baptist. Until the 19th century, the painting was attributed to the Italian Renaissance painter Raffaello Sanzio da Urbino, more often known as Raphael. Then doubts were raised over the Joseph figure looking like an afterthought and whether Raphael had painted the lower section.

The museum's website page for the oil painting solely credits it to Raphael.

According to a new research paper published on December 21 in the journal Heritage Science, analysis of the painting using an AI algorithm with an accuracy of 98 percent found that the painting was not entirely made by the Italian artist, raising questions about whether Raphael indeed painted the face of Joseph.

The researchers, led by University of Bradford visual computing professor Hassan Ugail, noted that the AI analysis supported earlier work by art historians who had previously questioned the full attribution of this painting to Raphael alone, suggesting that his associate, Giulio Romano, might have had a hand in it.

University of Bradford emeritus professor of molecular spectroscopy Howell Edwards, who co-authored the paper, told the Guardian: "The AI program analysis of our work has demonstrated conclusively that whereas the three figures of the Madonna, [Jesus] and St John the Baptist are unequivocally by Raphael, that of St Joseph is not, and has been painted by someone else."

In January, Ugail was part of a team of researchers who used AI-assisted computer-based facial recognition on a painting known as the de Brécy Tondo to help determine that it, too, was a work by Raphael. The research team found that the faces of the Madonna and child in the de Brécy Tondo were identical to ones in the Raphael altarpiece the Sistine Madonna. Then another study called into question the results of that research, and museum experts raised questions about the methodology.

Ugail told the Guardian that he knows nothing about art, and that the reception to his work from art historians can be frosty. "I think there is fear and they also think we are naive, that we don't know what we are doing," he said.

While there is ongoing concern over how the use of AI will eliminate the work of human beings, the research team emphasized in the conclusion of their Heritage Science paper that AI could become a useful resource for art historians and collectors as a supplementary tool for verifying paintings alongside existing methods such as scholarly analysis, spectroscopic imaging, and dating techniques.

"As advances continue to be made in machine learning and image processing technologies, this method has the potential to become part of an array of tools for artwork analysis and verification," the paper said. "It can operate in conjunction with other methods currently in use, including in-depth scrutiny by art historians and various advanced imaging techniques, thus contributing to a more thorough and dependable framework for artwork authentication and analysis."


Why Bill Gates Says AI Will Supercharge Medical Innovations – CNET

Bill Gates' first grandchild was born in 2023, so the year will forever be special to him, he says. It was also the year that artificial intelligence went mainstream, spurred by the arrival of ChatGPT. And that got Gates thinking about how the world his granddaughter is coming into will change in a positive way because of AI.

The co-founder of Microsoft and a guiding figure of the PC era across several decades, Gates knows a thing or two about technological revolutions. He sees 2024 as a monumental year for artificial intelligence, with the technology becoming especially important in global health, where Gates and his namesake foundation have been working for decades.

"We now have a better sense of what types of jobs AI will be able to do by itself and which ones it will serve as a copilot for," Gates wrote in a lengthy post on his GatesNote blog this week. "And it's clearer than ever how AI can be used to improve access to education, mental health, and more. It motivates me to make sure this technology helps reduce -- and doesn't contribute to -- the awful inequities we see around the world."

It's been quite a year for AI, and more specifically, generative AI. Gen AI goes a step further than other AI methods. It can create new materials, such as text, images, speech or video, based on its own understanding of the patterns it recognizes in data.

Gen AI became known thanks to the launch of OpenAI's ChatGPT in late 2022, although smart home controls and AI-powered virtual assistants such as Alexa had already made inroads into homes and popular culture.

ChatGPT, the frontrunner in the onslaught of generative AI tools released over the last year, allows anyone with a smartphone or a laptop to use AI for generating information or images. These tools have been trained on huge swaths of data that allow them to come up with original responses to our queries, with varying degrees of success. More than 100 million people use ChatGPT each week, OpenAI chief executive Sam Altman said in November. Microsoft is a significant investor in OpenAI.

Other companies aren't ceding territory to Microsoft. In early December, for instance, Google began updating its Bard AI chatbot with a new AI model called Gemini that provides improved text-based chat abilities. Tech companies are continuing to add Gen AI abilities into programs and devices of all kinds, from search engines to smart phones.

In 2023, investors poured nearly $10 billion into gen AI startups, more than double the $4.4 billion invested the year before, according to GlobalData.

But even as Gen AI explodes in popularity, many users are still cautious. In addition to concerns that AI could replace human employees, many worry about it putting forth inaccurate information. Dictionary.com selected the AI term "hallucinate," describing what happens when AI produces false information, as its word of the year.

Gates thinks mainstream integration of AI is coming soon.

"If I had to make a prediction, in high-income countries like the United States, I would guess that we are 18-24 months away from significant levels of AI use by the general population," he wrote.

But he also sees 2024 as a turning point.

Since stepping down as Microsoft CEO in 2000, Gates has focused on philanthropy, founding the Bill & Melinda Gates Foundation with his now-former wife. It's in areas related to his foundation's work in global health where Gates sees AI becoming helpful in 2024.

Fighting antibiotic resistance: He cites an AI-powered tool under development at the Aurum Institute in Ghana that helps health workers prescribe antibiotics without contributing to antimicrobial resistance, where pathogens learn how to get past antibiotic defenses. The tool can comb through all the available information about antimicrobial resistance and suggest the best drug plan for a patient.

High-risk pregnancy help: A woman dies in childbirth every two minutes, Gates says. He's hopeful that AI can combat this horrifying statistic. AI-powered ultrasounds can help identify pregnancy risks, and the Gates foundation is working to fund that process. Also, AI researchers at ARMMAN, an India-based nonprofit organization, are working on a large language model (the technology that underlies ChatGPT and other AI chatbots) that can help health workers treating high-risk pregnancies.

HIV risk assessment: Many people aren't comfortable talking to a doctor about their sexual history, but that can be vital for assessing risk for diseases like HIV. Gates is excited about a South African chatbot called Your Choice, being developed by Sophie Pascoe of Wits Health Consortium. The chatbot acts as a nonjudgmental counselor that can provide round-the-clock advice, especially for vulnerable populations.

Quick access to medical records: While people in rich countries may have their medical records easily available, in other countries, many people don't have a documented medical history, Gates says. This can hinder their medical treatment because their doctors need to know about allergies, past health issues and more. A Pakistani team is working on a voice-enabled mobile app that could make this easier, asking a series of questions and filling out a patient's medical record with the answers.

Gates also sees AI assisting in education, calling AI education tools "mindblowing," as they are tailored to individual learners, and says they will "only get better." He's excited about how the technology can be localized to students in many different countries and cultural contexts.

Not everything on Gates' mind is AI-related. He's concerned about climate change, saying he's "blown away by the passion from young climate activists," and hopeful that 2024 will see more investment in innovations that will help those who are most affected by the climate crisis.

And he even plunges into the debate over nuclear energy. Gates notes that high-profile disasters such as Chernobyl in the 1980s and Three Mile Island in the late 1970s have spotlighted the risks, but over the past year, he's seen a shift towards acceptance. He sees the once-bogeyman of the energy world as necessary to meet the world's growing need for energy while eliminating carbon emissions.

A New York Times article in early December noted that Gates was "long skeptical" of what AI could do. That changed in August 2022, when he saw a demonstration of OpenAI's GPT-4, the large language model underlying ChatGPT. That sold Gates on the concept, and he helped Microsoft "move aggressively to capitalize on generative AI."

Although Gates left Microsoft's board in 2020, he's still an adviser to its CEO, Satya Nadella. Microsoft has plunged full-bore into the AI world. The company invested heavily in OpenAI, the creator of ChatGPT, earlier this year, and it's been adding the technology across its online services, including its Bing search engine.

The company also reimagined Windows 11 with the addition of Microsoft Copilot, which makes AI assistance always available from the Windows 11 desktop taskbar. Microsoft vice president Yusuf Mehdi calls it the most significant update to the operating system so far, and it works across multiple apps and mobile phones.

Gates' year-end letter compares the rise of AI to that of the internet, email and search engines, noting that it wasn't long ago when many people were unfamiliar with these things, and now they are part of our daily lives. Gates sees the same kind of sea change coming with AI.

But he admits that it won't be smooth, giving an example from his own life.

"I thought I would use AI tools for the foundation's strategy reviews this year, which require reading hundreds of pages of briefing materials that an AI could accurately summarize for me," Gates says.

But that didn't happen.

"Old habits are hard to break, and I ended up preparing for [the reviews] the same way I always do," he writes.

Editors' note: CNET is using an AI engine to help create some stories. For more, see our AI policy.


New AI model can predict human lifespan, researchers say. They want to make sure it’s used for good – Phys.org


Researchers have created an artificial intelligence tool that uses sequences of life events, such as health history, education, job and income, to predict everything from a person's personality to their mortality.

Built using transformer models, which power large language models (LLMs) like ChatGPT, the new tool, life2vec, is trained on a data set pulled from the entire population of Denmark: 6 million people. The data set was made available only to the researchers by the Danish government.

The tool the researchers built based on this complex set of data is capable of predicting the future, including the lifespan of individuals, with an accuracy that exceeds state-of-the-art models. But despite its predictive power, the team behind the research says it is best used as the foundation for future work, not an end in and of itself.

"Even though we're using prediction to evaluate how good these models are, the tool shouldn't be used for prediction on real people," says Tina Eliassi-Rad, professor of computer science and the inaugural President Joseph E. Aoun Professor at Northeastern University. "It is a prediction model based on a specific data set of a specific population."


Eliassi-Rad brought her AI ethics expertise to the project. "These tools allow you to see into your society in a different way: the policies you have, the rules and regulations you have," she says. "You can think of it as a scan of what is happening on the ground."

By involving social scientists in the process of building this tool, the team hopes it brings a human-centered approach to AI development that doesn't lose sight of the humans amid the massive data set their tool has been trained on.

"This model offers a much more comprehensive reflection of the world as it's lived by human beings than many other models," says Sune Lehmann, author on the paper, which was recently published in Nature Computational Science. A Research Briefing on the topic is presented in the same journal issue.

At the heart of life2vec is the massive data set that the researchers used to train their model. The data is held by Statistics Denmark, the central authority on Danish statistics, and, although tightly regulated, can be accessed by some members of the public, including researchers. The reason it's so tightly controlled is it includes a detailed registry of every Danish citizen.

The many events and elements that make up a life are spelled out in the data, from health factors and education to income. The researchers used that data to create long patterns of recurring life events to feed into their model, taking the transformer model approach used to train LLMs on language and adapting it for a human life represented as a sequence of events.

"The whole story of a human life, in a way, can also be thought of as a giant long sentence of the many things that can happen to a person," says Lehmann, a professor of networks and complexity science at DTU Compute, Technical University of Denmark and previously a postdoctoral fellow at Northeastern.

The model uses the information it learns from observing millions of life event sequences to build what are called vector representations in embedding spaces, where it starts to categorize and draw connections between life events like income, education or health factors. These embedding spaces serve as a foundation for the predictions the model ends up making.
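A toy sketch of that approach: treat each life event as a token, embed the sequence, and run it through a transformer encoder whose pooled output feeds a prediction head. The event names, dimensions and architecture below are illustrative assumptions; the published life2vec model is far larger and trained on the Danish registry data.

```python
import torch
import torch.nn as nn

# A tiny vocabulary of life events standing in for the registry data.
EVENTS = ["birth", "school", "degree", "job_teacher", "income_up",
          "diagnosis_flu", "moved_city", "job_engineer"]
vocab = {event: i for i, event in enumerate(EVENTS)}

embed = nn.Embedding(len(vocab), 32)            # events -> vectors
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(32, 1)                         # e.g. a mortality logit

# One "life sentence" becomes a sequence of token ids.
life = ["birth", "school", "degree", "job_teacher", "moved_city"]
ids = torch.tensor([[vocab[e] for e in life]])  # shape (1, seq_len)
hidden = encoder(embed(ids))                    # (1, seq_len, 32)
prob = torch.sigmoid(head(hidden.mean(dim=1)))  # pooled prediction
print(prob.item())
```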

One of the life events that the researchers predicted was a person's probability of mortality.

"When we visualize the space that the model uses to make predictions, it looks like a long cylinder that takes you from low probability of death to high probability of death," Lehmann says. "Then we can show that in the end where there's high probability of death, a lot of those people actually died, and in the end where there's low probability of dying, the causes of death are something that we couldn't predict, like car accidents."

The paper also illustrates how the model is capable of predicting individual answers to a standard personality questionnaire, specifically when it comes to extroversion.

Eliassi-Rad and Lehmann note that although the model makes highly accurate predictions, they are based on correlations, highly specific cultural and societal contexts and the kinds of biases that exist in every data set.

"This kind of tool is like an observatory of societyand not all societies," Eliassi-Rad says. "This study was done in Denmark, and Denmark has its own culture, its own laws and its own societal rules. Whether this can be done in America is a different story."

Given all those caveats, Eliassi-Rad and Lehmann view their predictive model less like an end product and more like the beginning of a conversation. Lehmann says major tech companies have likely been creating these kinds of predictive algorithms for years in locked rooms. He hopes this work can start to create a more open, public understanding of how these tools work, what they are capable of, and how they should and shouldn't be used.

More information: Germans Savcisens et al, Using sequences of life-events to predict human lives, Nature Computational Science (2023). DOI: 10.1038/s43588-023-00573-5

A transformer method that predicts human lives from sequences of life events, Nature Computational Science (2023). DOI: 10.1038/s43588-023-00586-0

Journal information: Nature Computational Science


Chamath Palihapitiya says venture capitalists also face disruption from AI, and startup founders stand to benefit – Fortune

Artificial intelligence has been inescapable this year. After OpenAI released ChatGPT some 13 months ago, attention turned to how such tools will disrupt careers and industries, and eager venture capitalists poured billions into AI startups that might do the disrupting.

But VCs themselves could get disrupted, according to billionaire investor Chamath Palihapitiya, a former Facebook executive and the CEO of VC firm Social Capital.

"We talk about AI as a big disruptor to the big companies and this and that, but AI may be the biggest disruptor to VC in the end," Palihapitiya said on the All-In Podcast this week.

A world where AI proliferates, he said, is positive for founders, who will be able to own more of their companies rather than give away too much equity to VCs.

In the past, he said, a tech startup with $2 million in seed funding might hire seven people and have enough capital to survive for a year and a half, after which it hopefully gained enough traction so that investors would pony up $10 million or $15 million in Series A funding. The downside, of course, is that in exchange for capital, VCs want equity in the company.

But AI tools give founders more leverage, Palihapitiya said, mentioning GitHub Copilot, which makes creating and fixing code much easier. Startups can now hire programmers, perhaps in other countries with lower pay rates, to use such tools to get more done faster, he noted.

The upshot is that, today, a tech startup with the same amount of seed funding might have a three- or four-person team and survive on that $2 million for four years rather than a year and a half. Founders could then end up owning 80% of their company with the potential to exit for $50 million or $100 million, "and they've made more money than in a traditional outcome," he said.

"It's only a matter of time," Palihapitiya added, "until they can put two and two together in an Excel spreadsheet to figure out that owning 50% of a $100 million company is greater than owning 18% of some other company when you're massively diluted, or 8% or whatever."
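The arithmetic behind that comparison is simple enough to sketch. All figures below are illustrative assumptions chosen to make the trade-off concrete; the traditional path's exit value in particular is not from the interview.

```python
def founder_take(exit_value, founder_stake):
    """What the founder pockets at exit, ignoring taxes and preferences."""
    return exit_value * founder_stake

# Lean, AI-leveraged path: little dilution, smaller exit.
lean = founder_take(100_000_000, 0.50)          # 50% of a $100M company
# Traditional venture path: bigger assumed exit, heavy dilution.
traditional = founder_take(250_000_000, 0.18)   # 18% of an assumed $250M exit

print(f"lean path:        ${lean:,.0f}")         # $50,000,000
print(f"traditional path: ${traditional:,.0f}")  # $45,000,000
```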

Jason Calacanis, an angel investor, responded that now, instead of founders of a particular cohort competing on who can raise the most money at the highest valuation, he's seen them shifting to "how do I get to profitability and how do I own as much of my company as possible?"

Palihapitiya became the face of the SPAC boom-and-bust a few years back due to his involvement with special purpose acquisition companies: shell corporations listed on a stock exchange that acquire a private company, thereby making it public sans the rigors of the IPO process.

This isn't the first time he has mulled the role of VCs in an AI-altered world.

It "seems pretty reasonable and logical," he said last month on the podcast, that AI productivity gains will lead to tens or hundreds of millions of startups made up of only one or two people.

"There's a lot of sort of financial engineering that kind of goes away in that world," he said. "I think the job of the venture capitalist changes really profoundly. I think there's a reasonable case to make that it doesn't exist."
