
Real-time deepfake detection: How Intel Labs uses AI to fight … – ZDNet

A few years ago, deepfakes seemed like a novel technology whose makers relied on serious computing power. Today, deepfakes are ubiquitous and have the potential to be misused for misinformation, hacking, and other nefarious purposes.

Intel Labs has developed real-time deepfake detection technology to counteract this growing problem. Ilke Demir, a senior research scientist at Intel, explains the technology behind deepfakes, Intel's detection methods, and the ethical considerations involved in developing and implementing such tools.


Deepfakes are videos, speech, or images where the actor or action is not real but created by artificial intelligence (AI). Deepfakes use complex deep-learning architectures, such as generative adversarial networks, variational auto-encoders, and other AI models, to create highly realistic and believable content. These models can generate synthetic personalities, lip-sync videos, and even text-to-image conversions, making it challenging to distinguish between real and fake content.

The term deepfake is sometimes applied to authentic content that has been altered, such as the 2019 video of former House Speaker Nancy Pelosi, which was doctored to make her appear inebriated.

Demir's team examines computational deepfakes, which are synthetic forms of content generated by machines. "The reason that it is called deepfake is that there is this complicated deep-learning architecture in generative AI creating all that content," he says.


Cybercriminals and other bad actors often misuse deepfake technology. Some use cases include political misinformation, adult content featuring celebrities or non-consenting individuals, market manipulation, and impersonation for monetary gain. These negative impacts underscore the need for effective deepfake detection methods.

Intel Labs has developed one of the world's first real-time deepfake detection platforms. Instead of looking for artifacts of fakery, the technology focuses on detecting what's real, such as heart rate. Using a technique called photoplethysmography, the detection system analyzes the color changes in veins caused by blood oxygen content, changes that are computationally visible, to determine whether the person on screen is a real human or synthetic.

"We are trying to look at what is real and authentic. Heart rate is one of [the signals]," said Demir. "So when your heart pumps blood, it goes to your veins, and the veins change color because of the oxygen content that color changes. It is not visible to our eye; I cannot just look at this video and see your heart rate. But that color change is computationally visible."
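The principle Demir describes can be sketched in a few lines: average the green channel of a tracked face region over time, then look for a dominant frequency in the human heart-rate band. This is a minimal illustration of the idea, not Intel's detector; the `green_means` trace is assumed to come from some upstream face tracker.

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Estimate beats per minute from the mean green-channel intensity
    of a face region, sampled once per video frame."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()              # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Only consider a plausible human pulse band: 0.7-4 Hz (42-240 bpm).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])] * 60.0

# Synthetic check: a 1.2 Hz (72 bpm) pulse sampled at 30 fps for 10 seconds.
fps = 30
t = np.arange(0, 10, 1.0 / fps)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t)
trace += np.random.default_rng(0).normal(0, 0.1, t.size)  # sensor noise
print(round(estimate_heart_rate(trace, fps)))  # 72
```

A synthetic face with no underlying pulse would show no stable peak in this band, which is the kind of cue such a detector can exploit.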


Intel's deepfake detection technology is being implemented across various sectors and platforms, including social media tools, news agencies, broadcasters, content creation tools, startups, and nonprofits. By integrating the technology into their workflows, these organizations can better identify and mitigate the spread of deepfakes and misinformation.

Despite the potential for misuse, deepfake technology has legitimate applications. One of the early uses was the creation of avatars to better represent individuals in digital environments. Demir refers to a specific use case called "MyFace, MyChoice," which leverages deepfakes to enhance privacy on online platforms.

In simple terms, this approach allows individuals to control their appearance in online photos, replacing their face with a "quantifiably dissimilar deepfake" if they want to avoid being recognized. These controls offer increased privacy and control over one's identity, helping to counteract automatic face-recognition algorithms.
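One plausible way to make "quantifiably dissimilar" concrete is to measure the distance between face-recognition embeddings of the original and replacement faces, and accept the replacement only beyond some threshold. The cosine metric and the 0.6 threshold below are illustrative assumptions, not details from the MyFace, MyChoice work.

```python
import numpy as np

def dissimilarity(a, b):
    """Cosine distance between two face embeddings (0 = same direction, 2 = opposite)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def sufficiently_dissimilar(original, replacement, threshold=0.6):
    """Accept a replacement face only if a recognizer would be unlikely to match it.
    The threshold is hypothetical and would be tuned against a real face model."""
    return dissimilarity(original, replacement) >= threshold

# Toy 2-D "embeddings": identical faces score 0, orthogonal ones score 1.
print(dissimilarity([1.0, 0.0], [1.0, 0.0]))            # 0.0
print(sufficiently_dissimilar([1.0, 0.0], [0.0, 1.0]))  # True
```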


Ensuring ethical development and implementation of AI technologies is crucial. Intel's Trusted Media team collaborates with anthropologists, social scientists, and user researchers to evaluate and refine the technology. The company also has a Responsible AI Council, which reviews AI systems for responsible and ethical principles, including potential biases, limitations, and possible harmful use cases. This multidisciplinary approach helps ensure that AI technologies, like deepfake detection, serve to benefit humans rather than cause harm.

"We have legal people, we have social scientists, we have psychologists, and all of them are coming together to pinpoint the limitations to find if there's bias -- algorithmic bias, systematic bias, data bias, any type of bias," says Demir. The team scans the code to find "any possible use cases of a technology that can harm people."


As deepfakes become more prevalent and sophisticated, developing and implementing detection technologies to combat misinformation and other harmful consequences is increasingly important. Intel Labs' real-time deepfake detection technology offers a scalable and effective solution to this growing problem.

By incorporating ethical considerations and collaborating with experts across various disciplines, Intel is working towards a future where AI technologies are used responsibly and for the betterment of society.


Vitalik Buterin Linked Whale Moves 2,013 Ethereum to OKX – CoinGape

Any significant on-chain movement of assets by a whale or prominent figure can spark intrigue and speculation among investors in the crypto space, a trend that is now being observed with one of the latest Ethereum transfers in the industry.

Recently, an Ethereum whale linked to Vitalik Buterin, the co-founder of Ethereum, transferred 2,013 Ethereum (ETH) to the OKX crypto exchange.

Crypto analytics provider Lookonchain revealed in a recent tweet that, as of the time of writing, the whale had deposited the sum, worth approximately $3.76 million, only two hours earlier. However, this is only one of the large-scale transactions involving this Buterin-linked whale.

Between December 30, 2022, and May 18, 2023, the address 0x9e92, which is connected with the whale, received a total of 22,300 ETH, equivalent to around $41.6 million, directly from Vitalik Buterin himself.
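As a quick sanity check, the implied ETH prices at receipt and at deposit can be computed from the figures reported above; both land near the spot price quoted later in this piece.

```python
# Figures as reported by Lookonchain.
received_eth, received_usd = 22_300, 41_600_000  # Dec. 2022 - May 2023 transfers
deposit_eth, deposit_usd = 2_013, 3_760_000      # the recent OKX deposit

print(f"implied price at receipt: ${received_usd / received_eth:,.0f}")  # ~$1,865
print(f"implied price at deposit: ${deposit_usd / deposit_eth:,.0f}")    # ~$1,868
```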

The recent substantial deposits made by the whale linked to Vitalik Buterin to the OKX exchange have ignited speculation within the crypto community about the potential implications and whether it signals an imminent selloff. While these transactions raise questions about their implications for the Ethereum market, it is crucial to approach them with a balanced perspective.

It is worth noting that Buterin, like any other person, may have personal financial demands or want to diversify his holdings. A move like this could be part of a larger plan to rebalance his crypto portfolio or cover personal costs. Should this be the aim of the fund movement, the transfer may not signify an impending selloff for the second-largest digital currency.

Additionally, it is plausible that the transferred funds are intended for charitable contributions or to support projects that align with Buterin's vision for the blockchain ecosystem. If this is the case, it is a great step for the Ethereum community and may contribute to the cryptocurrency's long-term growth and adoption.

Another possibility is that the transfer was motivated by the need to take advantage of key services such as staking and Earn programs on OKX. Whales like Justin Sun are known to make such related moves and overall, such on-chain mass transfers do not always spark instability within their respective ecosystems.

At the time of writing, Ethereum is changing hands at a price of $1,866.99, up by a negligible 0.4%.


Bias in AI is real. But it doesn’t have to exist. – POLITICO

With help from Ella Creamer, Brakkton Booker and Ben Weyl


Hello, Recast friends! House Republicans narrowly passed a contentious defense bill and a major Alaska political rematch is on the way. But here for you is a fascinating interview between Mohar Chatterjee, a technology reporter at POLITICO, and AI ethicist Rumman Chowdhury. You can read the first part of this interview in last week's Digital Future Daily.

Today, we're diving deeper into the intersection of identity and technology. With AI igniting widespread public adoption and anxiety, the struggle to get this technology right is real. Its critics are worried that AI systems, particularly large language models trained on massive quantities of data, might have biases that deepen existing systemic discrimination.

These are not theoretical worries. Infamous examples of bias include a racially prejudiced algorithm used by law enforcement to identify potential repeat offenders, Amazons old AI-powered hiring tool that discriminated against women, and more recently, the ability to prompt ChatGPT to make racist or sexist inferences.

Getting these powerful AI systems to reveal all their pitfalls is a tall order, but it's one that's found interest from federal government agencies, industry leaders and the technology's day-to-day users. The Commerce Department's National Telecommunications and Information Administration is seeking feedback on how to support the development of AI audits, while the White House's Office of Science and Technology Policy is gathering input from the public on what national priorities for AI should be.

We spoke to AI ethicist Rumman Chowdhury about her hopes and fears for this quickly spreading technology. Previously the director of Twitter's META (Machine Learning Ethics, Transparency, and Accountability) team and the head of Accenture's Responsible AI team, Chowdhury has had her expertise tapped by both Congress and the White House in recent months. She appeared as a witness at a June hearing held by the House Science, Space and Technology Committee to testify on how AI can be governed federally without stifling innovation. She is also one of the organizers for the White House-endorsed hacking exercise on large language models (called a red-teaming exercise) to be held at a large hacker conference called DEFCON in August. The exercise is meant to publicly identify potential vulnerabilities in these large AI models.

Chowdhury is currently a Responsible AI Fellow at the Harvard Berkman Klein Center and chief scientist at Parity Consulting.


This interview has been edited for length and clarity.

THE RECAST: For many people, AI conjures notions of a dystopian future with machines taking over every aspect of our lives. Is that fear justified?

CHOWDHURY: At the current state of artificial intelligence, that fear is fully unjustified. And even in a future state of artificial intelligence, we can't forget that human beings have to build the technology and human beings have to implement the technology. So even in more of a stretch of the imagination, AI does not come alive and actively make decisions to harm people. People have to build the technology, and people have to implement it, for it to be harmful.


THE RECAST: What does bias in AI look like? How will marginalized communities be impacted? What's your biggest fear?

CHOWDHURY: You've touched on exactly what my biggest fear is. My biggest fear is the problems we already see today, manifesting themselves in machine learning models and predictive modeling. And early AI systems already demonstrate very clear societal harm, reflecting bias in society.

So for example, if you look at medical implementations of artificial intelligence and machine learning, you'll see how members of the African American community, in particular Black women, are not treated well by these models because of the history of being not treated well by physicians. We see similar things in terms of biases in the workplace against minority groups. Over and over again, we have many clearly documented instances of this happening with AI systems. This doesn't go away because we make more sophisticated or smarter systems. All of this is actually rooted in the core data. And it's the same data all of these models are trained on.

THE RECAST: Your team at Accenture built the first enterprise algorithmic bias detection and mitigation tool. Why were you concerned about bias in AI back then and how has this concern evolved?

CHOWDHURY: In the earlier days of responsible AI, we're talking 2017-2018, we were actually in a very similar state of having more philosophical conversations. There were a few of us, and we were, frankly, primarily women, who talked about the actual manifestation of real societal harm and injury. And some of the earliest books written about algorithmic bias came out a few years before that or around that time. In particular: Safiya Noble's Algorithms of Oppression, Virginia Eubanks' Automating Inequality, and Cathy O'Neil's Weapons of Math Destruction all talk about the same topic. So the issue became: How do we create products and tools that work at the scale at which companies move to help them identify and stop harms before they go ahead building technology?

THE RECAST: Your team at Twitter discovered that the platform's algorithm favored right-wing posts. Google's Timnit Gebru blew the whistle on ethical dilemmas posed by large language models. Why do you think so many whistleblowers in tech are women, particularly women of color?

CHOWDHURY: To clarify, this was during my time leading the machine learning ethics team at Twitter. So this work actually wasn't whistleblowing. This was approved by the company; we did this, you know, in conjunction with leadership. What we found in that study was that Twitter's machine learning algorithm amplified center-right content in seven out of eight countries. What we weren't able to find out was whether this was due to algorithmic bias or whether it's due to human behavior. Those are actually two different root causes that have two different solutions.

Unfortunately, in many tech situations, my case is rare. Very often, issues that are raised by women and women of color get ignored in the workplace, because, more broadly, women of color tend to not be listened to in general. So it's unsurprising to me that after having exhausted every internal channel or possibility, people who are typically ignored have to turn to more extreme measures.

Being a whistleblower is not romantic; it's actually very, very difficult for most individuals. If you think about what being a whistleblower means, you have essentially blackballed yourself from the industry that you've worked in, the industry that you care about.

Unfortunately, this is more likely to happen to women of color. We are more likely to identify issues and have a stronger sense of justice and this desire to fix a problem, but simultaneously, we are more likely to be ignored. But again, I will say my example at Twitter was actually a rare case of that not happening.


THE RECAST: You were fired by Elon Musk shortly after he took over Twitter. What are your thoughts on why you were a target?

CHOWDHURY: I don't see the kind of work that the machine learning ethics team did being aligned with the kind of company Elon Musk wants to run.

If we just look at the Twitter files, we look at the kinds of people he's attacked, some of them being folks like Yoel Roth, people who did things like trust and safety. The kind of work that my team did is very aligned with the work of teams that he is not funding or prioritizing. Frankly, he's firing teams that did that work. To be honest, I don't think I would have worked for that company anyway.

THE RECAST: When you testified before Congress last month, you said, "Artificial intelligence is not inherently neutral, trustworthy, nor beneficial." Can you talk a little more about that?

CHOWDHURY: I very intentionally picked those words. There is this misconception that a sufficiently advanced AI model trained on significant amounts of data will somehow be neutral. How this technology is designed and who it is designed for is very intentional, and can build in biases. So technology is not neutral.

These models are also not inherently trustworthy. That ties to a term that I coined called "moral outsourcing": this idea that technology is making these decisions and that the people making the decisions behind the scenes have no agency or no responsibility. Trustworthiness comes from building institutions and systems of accountability. There's nothing inherently trustworthy about these systems, simply because they sound smart or use a lot of data or have really, really complex programming.

And just because you build something with the best of intentions doesn't actually mean that it's going to inherently be beneficial. There's actually nothing inherently beneficial about AI. We either build it to be beneficial in use or we don't.

THE RECAST: Why do we need a global AI governance organization as you mentioned in your congressional testimony?

CHOWDHURY: There are a couple of scenarios that are not great. One would be a splintering of technology. We are actually living in an era of splintered social media, which means that people get information mediated via different sources. That actually deepens rifts between different kinds of people. If somebody in China or Russia sees a very different take on what's happening in Ukraine compared to somebody in the U.S., their fundamental understanding of what is truth is very different. That makes it difficult for people to actually live in a globalized world.

Another thing that I am concerned with in creating global regulation is that the majority of the Global South is not included. I'm part of the OECD's working group on AI governance, but these narratives are coming out of Europe, the U.K. or the U.S. I just don't want there to be a lack of mindfulness, when creating global governance, in assuming that the Global South has nothing to say.

And there are some questions that actually are not global-scale questions to ask. So for this global entity to supersede national sovereignty, these have to be really, really big questions. The way I've been framing it is: What is the climate change of AI? What are the questions that are so big, they can't be addressed by a country or a company, and we need to push it up? So the default shouldn't be, "Oh, clearly punt this to the global entity"; it should be an exception rather than the rule.

You did it! You made it to Friday! And we're sending you into the weekend with some news and a few must-reads and must-sees.

Divided on Defense: The GOP-led House passed a controversial defense bill Friday that targets the Pentagon's policy on abortions, medical care for transgender troops and diversity issues. POLITICO's Connor O'Brien reports that it doesn't have a shot at passing the Senate.

Alaska Grudge Match: Republican Nick Begich says he's making another run at Alaska's at-large congressional seat, once again challenging 2023 Power List nominee Rep. Mary Peltola, a Democrat. POLITICO's Eric Bazail-Eimil has more.

The crisis over American manhood is really code for something else, according to a new POLITICO Magazine piece from Virginia Heffernan.

A Korean conglomerate endeavors to build an elevator into the sky in Djuna's part noir, part cyberpunk novel Counterweight, out now.

Earth Mama movingly traces the life of Gia (Tia Nomore), a mother trying to regain custody of her two kids in foster care.

Lakota Nation vs. United States weaves archival footage, interviews and images in its depiction of the tribe's 150-year struggle to regain a homeland.

The surprise collab we never knew we needed: BTS' Jung Kook and Latto on an energetic new bop, "Seven."

Karol G drops "S91," an emotional anthem inspired by a Bible verse, and a music video featuring a cross made of speakers.

TikTok of the Day: Generational differences


This is how generative AI will change the gig economy for the better – ZDNet

Artificial intelligence will augment work and could add more opportunities to the job market rather than tank it, according to tech executive Gali Arnon. While some fear that AI will erase huge numbers of roles, Arnon argues that AI will accelerate the pace of job creation, augment work, and accelerate startup opportunities.

In an interview with ZDNET, Arnon, CMO of Fiverr, a platform that connects freelancers with work opportunities, says generative artificial intelligence is smart, but it can't dominate the economy because its capabilities are narrow and limited to specific tasks.


Arnon says Fiverr data shows that freelancers are using AI as a "tool" that augments creative work, but doesn't replace humans. Instead, she says AI is creating "new jobs, new opportunities" because it speeds up manual and analog work, allowing freelancers to spend more time on creative and interpersonal tasks.

When it comes to integrating AI into business services, there are several examples that demonstrate the technology's potential for augmenting human work. For instance, generative AI can help writers and journalists by quickly extracting key points and quotes from a transcript, saving time and improving efficiency.

AI can also be used to create artwork, optimize customer support processes, and even aid in code-writing processes. The key to success is finding the right balance between using AI and maintaining the human touch.


Arnon says creative professionals are learning to master prompts for generative AI systems. Basic prompts produce low-quality results, but experts can chain prompts to multiple AI systems to produce unique and high-quality images, audio, and text.

She says some of the best creative professionals edit AI-generated outputs in other applications, such as Adobe's Creative Cloud. The end results can be high in quality and unique in style. Arnon says professionals are augmenting their skills with AI, "to use it in a way that will just set the bar higher, set a new standard" of quality.
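Chaining prompts across systems is conceptually just function composition: each stage turns one model's output into the next model's input. In the sketch below, the stage functions are invented stand-ins for real API calls to a drafting model and a style refiner; only the chaining pattern itself is the point.

```python
def chain(stages, prompt):
    """Feed the output of each stage in as the next stage's prompt."""
    for stage in stages:
        prompt = stage(prompt)
    return prompt

# Stand-ins for real model calls: a drafting model, then a style refiner.
draft = lambda idea: f"A detailed scene description of: {idea}"
refine = lambda desc: f"{desc}, cinematic lighting, 35mm film"

print(chain([draft, refine], "a lighthouse at dusk"))
```

In practice each stage would call a different generative system, with the expert's skill lying in how each intermediate prompt is shaped.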

However, the ethical considerations around using generative AI in creative work are nuanced and challenging. One question employers must answer for their organizations is whether using AI-generated content, such as artwork or text, is considered cheating.


Arnon believes that as long as freelancers are transparent about their use of AI tools -- and do not claim the work as their own -- there is no ethical issue. The real challenge lies in ensuring that AI is used responsibly and ethically without undermining businesses or society at large.

In the coming months, Arnon believes that generative AI will continue to play a significant role in the future of freelancing and work. She says Fiverr is a microcosm of the broader workforce and reflects emerging trends in the job market. By embracing AI and leveraging its capabilities, businesses and freelancers can create new opportunities and jobs, ultimately benefiting the gig economy.

However, ensuring the ethical and responsible use of AI is crucial for its successful integration into the workforce. Through collaboration between regulators, businesses, and AI developers, it is possible to strike the right balance between innovation and ethical considerations, paving the way for a more efficient and dynamic workplace.


"We need to find the right checks and balances," Arnon says, "but eventually, I really believe humanity will know how to use AI, and it will make us only better."


Vitalik Buterin-Affiliated Whale Makes $4 Million Deposit at OKX – U.Today

Alex Dovbnya

A digital currency 'whale' linked to Ethereum co-founder Vitalik Buterin has reportedly deposited a staggering $4 million into the OKX cryptocurrency exchange


A crypto whale associated with Ethereum co-founder Vitalik Buterin made a $4 million deposit to cryptocurrency exchange OKX.

The deposit, totaling 2,013 Ethereum (ETH), was reported by blockchain analytics site Lookonchain. The whale's address, referred to as "0x9e92," has previously been involved in significant transactions with Buterin, creating an undeniable link between the two.

The Lookonchain data shows a series of substantial transactions involving Buterin and the whale, culminating in this recent deposit.

From Dec. 30, 2022, to May 18, the address "0x9e92" received 22,300 ETH ($41.6 million) from Buterin.

This comes amid volatile trading on Ethereum, with its price ranging between $1,865.01 and $1,908.54 over the last 24 hours, according to CoinGecko. As of now, Ethereum holds a market capitalization of $225.3 billion.

With the rise of blockchain analytics, the transparency of transactions on the Ethereum network has allowed observers to follow the flows of the cryptocurrency and speculate on the motives behind these large-scale transfers.

The motives behind the most recent transfers remain unclear.


Do Chinese investors own the majority of ETH? – Protos

A forensic researcher has suggested that Chinese investors may well own the vast majority of ether (ETH) in circulation via a raft of agents, LLCs, directors, advisors, and corporate structures.

As detailed by @BoringSleuth (aka TruthLabs), Ethereum was one of the first coins to distribute its initial supply through a pre-mine. Rather than a Bitcoin-like fair launch, which distributes coins via mining rewards through the public operation of a blockchain, Ethereum's co-founders pre-mined the majority of their currency's supply at genesis.

The idea of a pre-mine allocated for early investors and project developers was highly controversial when Ethereum held its initial coin offering (ICO) and it remains so today.

Ethereum has been famously tight-lipped about the owners of most of its earliest wallets. Although Vitalik Buterin and the Ethereum Foundation happily disclose some of their stash, most co-founders and pre-ICO team members received large allocations into anonymous wallets. To this day, most decline to publicly identify their wallets.

Any sophisticated investor would have found it relatively easy to contribute bitcoin to Ethereum's ICO through anonymous wallets that allowed them to obfuscate their identities and total holdings. In fact, during the ICO, it was extremely simple to contribute bitcoin via multiple wallets in order to hide one's total ownership as well as circumvent Ethereum's poorly enforced ICO contribution limits.

ICO thresholds per participant proved unenforceable. Indeed, Nansen's analysis indicates that just 100 of the 8,800 accounts contributing to Ethereum's ICO received a staggering 40% of the ETH sold.
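To put that concentration in perspective, using only the figures quoted above:

```python
accounts, top_accounts = 8_800, 100  # ICO accounts, per Nansen
top_share = 0.40                     # fraction of the ETH sold that the top 100 received

print(f"{top_accounts / accounts:.1%} of accounts")     # 1.1% of accounts
print(f"took roughly {top_share / (top_accounts / accounts):.0f}x "
      f"their proportional share")                      # roughly 35x
```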

Despite all of these limitations, researchers have made substantial progress in deanonymizing wallets and providing probabilistic estimates for the supply distribution of ETH.

In a previous article, Protos detailed Ethereum's ties to China, including ex-Alibaba partners, a director who worked on Chinese-language apps, interest from the Bank of China, and a unique translation of the ETH whitepaper into Mandarin. Chinese individuals also had an influence on controversial matters like Ethereum's rollback of The DAO hack in June 2016.

Christine Kim at Galaxy Digital conducted a detailed analysis of ETH supply distribution in June 2022. According to this research, 60% of Ethereum's total supply was allocated to users before the network's official launch, while 60 million ETH, or about 50% of the total current supply, was allocated to ICO investors.


Co-founders allocated another 10% (12 million ETH) to early team members and the ostensibly non-profit Ethereum Foundation.
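The quoted allocation figures are internally consistent, which is worth a quick check: if 12 million ETH was the 10% slice, the pre-launch base was about 120 million ETH, and the other percentages follow.

```python
# Figures quoted from Galaxy Digital's June 2022 analysis.
team_eth = 12_000_000  # the 10% allocated to early team members and the Foundation
ico_eth = 60_000_000   # allocated to ICO investors

implied_total = team_eth * 10                # 12M ETH at 10% implies ~120M ETH overall
print(implied_total)                         # 120000000
print(ico_eth / implied_total)               # 0.5 -> the quoted ~50%
print((ico_eth + team_eth) / implied_total)  # 0.6 -> the quoted 60% pre-launch figure
```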

Since then, recipients of that pre-mined team allocation have sent 10.3 million ETH to exchanges. This means that just 1.7 million ETH haven't moved.

But how does this relate to the issue of potential Chinese ownership? Although it's impossible to quantify the exact amount owned by Chinese investors, it's undeniable that throughout Ethereum's history, Chinese investment has always been a top priority.

While raising money for Ethereum, Vitalik Buterin placed great emphasis on courting Chinese investors. For example, he flew to China for a meeting with a number of them, including Wanxiang Group investment manager Deng Chao. Buterin once served as chief scientist at Wanxiang.

Fellow Ethereum Foundation board member Shen Bo apparently told HashKey Group chairman Xiao Feng that the team worried it might not have enough money to launch the mainnet. The team held at least one all-night meeting about the problem. According to Xiao Feng, that was the moment he suggested contributing $500,000 to Ethereum development under Wanxiang Blockchain Labs' name.

Every $1,000 invested in Ethereum's ICO is now worth over $6 million.

As described in Laura Shin's book The Cryptopians, Buterin tolerated and supported former executive director Ming Chan for far longer than his co-founders wanted, even after she'd stirred up drama involving members of the Ethereum Foundation board of directors. His motives for keeping her around for so long were unclear, even with her previous work on online Chinese-language projects.

Buterin also showed an obsession with learning the Chinese language, even letting it cut into his social life and work on Ethereum for weeks. By 2016, his mastery of Chinese was good enough for him to write and present a keynote address for the Shanghai Wanxiang Blockchain International Week.

Buterin and fellow Ethereum co-founder Anthony Di Iorio also joined the ChinaLedger initiative as advisors in May 2016. Supported by the Internet Securities Commission of the Securities Association of China, ChinaLedger aimed to research the creation of Internet of Everything applications that could support businesses working within the Chinese regulatory environment.

It's interesting to note that Ethereum's early marketing included the concept of a "World Computer" that developers could use to build nearly any blockchain-based app. Chinese entrepreneurs like Jeff Yang were already working on smart contract templates for Ethereum and Polygon. Yang was quoted as saying that China lacked a real code dev force, hinting at the possibility that, as early as 2017, China recognized that it needed Ethereum's developers as much as Buterin had needed Chinese investment money in 2015.

By 2017, Wanxiang had become known as a leading promoter of Ethereum adoption in China, something it would have been unlikely to do if it didn't have a significant stake in ETH and Ethereum-based apps.

Some people theorized that Wanxiang was one of several fronts that made it possible for Chinese investors to buy up a majority of the ETH supply without raising suspicion. Again, TruthLabs has theorized that Chinese investors, through their various agents and companies, could control up to two-thirds of the ETH supply.

TruthLabs also found evidence that Wanxiang and its subsidiary, Fenbushi Capital, set up and funded wallets owned by Buterin.

Fenbushi Capital was founded in 2015, the same year that the Ethereum network launched. Buterin had a part-time position at Fenbushi until 2018 and, even after leaving the job, retained a position as an advisor.

As major (and possibly majority) token holders, Chinese investors, including Wanxiang and HashKey, may still strongly influence ETH's development. And while much of Ethereum's blockchain is impossible to deanonymize, Wanxiang's continued support and Buterin's pro-China behavior over the years could certainly lend a degree of credence to this theory.


Read more:

Do Chinese investors own the majority of ETH? - Protos


The Last Word on AI and the Atom Bomb – WIRED

In some ways, it's hard to understand how this misalignment happened. We created all this by ourselves, for ourselves.

True, we're by nature "carbon chauvinists," as Tegmark put it: We like to think only flesh-and-blood machines like us can think, calculate, create. But the belief that machines can't do what we do ignores a key insight from AI: Intelligence is all about information processing, and it doesn't matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers.

Of course, there are those who say: Nonsense! Everything's hunky-dory! Even better! Bring on the machines. The sooner we merge with them the better; we've already started with our engineered eyes and hearts, our intimate attachments with devices. Ray Kurzweil, famously, can't wait for the coming singularity, when all distinctions are diminished to practically nothing. "It's really the next decades that we need to get through," Kurzweil told a massive audience recently.

Oh, just that.

Even Jaron Lanier, who says the idea of AI taking over is silly because it's made by humans, allows that human extinction is a possibility if we mess up how we use it and drive ourselves literally crazy: "To me the danger is that we'll use our technology to become mutually unintelligible or to become insane, if you like, in a way that we aren't acting with enough understanding and self-interest to survive, and we die through insanity, essentially."

Maybe we just forgot ourselves. "Losing our humanity" was a phrase repeated often by the bomb guys and almost as often today. The danger of out-of-control technology, my physicist friend wrote, is the worry that we might lose some of that undefinable and extraordinary specialness that makes people human. Seven or so decades later, Lanier concurs: "We have to say consciousness is a real thing and there is a mystical interiority to people that's different from other stuff because if we don't say people are special, how can we make a society or make technologies that serve people?"

Does it even matter if we go extinct?

Humans have long been distinguished for their capacity for empathy, kindness, the ability to recognize and respond to emotions in others. We pride ourselves on creativity and innovation, originality, adaptability, reason. A sense of self. We create science, art, music. We dance, we laugh.

But ever since Jane Goodall revealed that chimps could be altruistic, make tools, mourn their dead, all manner of critters, including fish, birds, and giraffes, have proven themselves capable of reason, planning ahead, having a sense of fairness, resisting temptation, even dreaming. (Only humans, via their huge misaligned brains, seem capable of truly mass destruction.)

It's possible that we sometimes fool ourselves into thinking animals can do all this because we anthropomorphize them. It's certain that we fool ourselves into thinking machines are our pals, our pets, our confidants. MIT's Sherry Turkle calls AI "artificial intimacy," because it's so good at providing fake, yet convincingly caring, relationships, including fake empathy. The timing couldn't be worse. The earth needs our attention urgently; we should be doing all we can to connect to nature, not intensify our connection to objects that don't care if humanity dies.

More:

The Last Word on AI and the Atom Bomb - WIRED


The secret to enterprise AI success: Make it understandable and … – VentureBeat


The promise of artificial intelligence is finally coming to life. Be it healthcare or fintech, companies across sectors are racing to implement LLMs and other forms of machine learning systems to complement their workflows and save time for other more pressing or high-value tasks. But it's all moving so fast that many may be ignoring one key question: How do we know the machines making decisions are not hallucinating?

In the field of healthcare, for instance, AI has the potential to predict clinical outcomes or discover drugs. If a model veers off-track in such scenarios, it could provide results that may end up harming a person or worse. Nobody would want that.

This is where the concept of AI interpretability comes in. It is the process of understanding the reasoning behind decisions or predictions made by machine learning systems and making that information comprehensible to decision-makers and other relevant parties with the autonomy to make changes.

When done right, it can help teams detect unexpected behaviors, allowing them to get rid of the issues before they cause real damage.


But that's far from being a piece of cake.

As critical sectors like healthcare continue to deploy models with minimal human supervision, AI interpretability has become important to ensure transparency and accountability in the system being used.

Transparency ensures that human operators can understand the underlying rationale of the ML system and audit it for biases, accuracy, fairness and adherence to ethical guidelines. Meanwhile, accountability ensures that the gaps identified are addressed on time. The latter is particularly essential in high-stakes domains such as automated credit scoring, medical diagnoses and autonomous driving, where an AIs decision can have far-reaching consequences.

Beyond this, AI interpretability also helps establish trust and acceptance of AI systems. Essentially, when individuals can understand and validate the reasoning behind decisions made by machines, they are more likely to trust their predictions and answers, resulting in widespread acceptance and adoption. More importantly, when there are explanations available, it is easier to address ethical and legal compliance questions, be it over discrimination or data usage.

While there are obvious benefits of AI interpretability, the complexity and opacity of modern machine learning models make it one hell of a challenge.

Most high-end AI applications today use deep neural networks (DNNs) that employ multiple hidden layers to enable reusable modular functions and deliver better efficiency in learning the relationship between input and output. With the same number of parameters and the same data, DNNs easily produce better results than the shallow neural networks often used for tasks such as linear regression or feature extraction.

However, this architecture of multiple layers and thousands or even millions of parameters renders DNNs highly opaque, making it difficult to understand how specific inputs contribute to a models decision. In contrast, shallow networks, with their simple architecture, are highly interpretable.

To sum up, there's often a trade-off between interpretability and predictive performance. If you go for high-performing models, like DNNs, the system may not deliver transparency, while if you go for something simpler and interpretable, like a shallow network, the accuracy of results may not be up to the mark.

Striking a balance between the two continues to be a challenge for researchers and practitioners worldwide, especially given the lack of a standardized interpretability technique.

To find some middle ground, researchers are developing rule-based and interpretable models, such as decision trees and linear models, that prioritize transparency. These models offer explicit rules and understandable representations, allowing human operators to interpret their decision-making process. However, they still lack the complexity and expressiveness of more advanced models.
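As a minimal sketch of what such a transparent model looks like in practice, the following trains a shallow decision tree in scikit-learn and prints its learned rules verbatim for a human to audit. The dataset and feature names here are illustrative choices, not drawn from any system discussed in the article:

```python
# A rule-based, interpretable model: a shallow decision tree whose learned
# decision rules can be printed and audited directly by a human operator.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the exact decision path the model uses; this explicit
# rule listing is the interpretability the paragraph above describes.
rules = export_text(tree, feature_names=["sepal_len", "sepal_wid",
                                         "petal_len", "petal_wid"])
print(rules)
```

Capping the depth is the deliberate trade-off: a two-level tree is easy to read but, as noted above, gives up the expressiveness of deeper models.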

As an alternative, post-hoc interpretability, where one applies tools to explain the decisions of models once they have been trained, can come in handy. Currently, methods like LIME (local interpretable model-agnostic explanations) and SHAP (SHapley Additive exPlanations) can provide insights into model behavior by approximating feature importance or generating local explanations. They have the potential to bridge the gap between complex models and interpretability.
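The post-hoc idea can also be sketched without either library, using permutation importance: shuffle one feature at a time after training and measure how much the model's score drops. The setup below uses synthetic data and scikit-learn purely for illustration:

```python
# Post-hoc, model-agnostic interpretability via permutation importance:
# permute each feature and measure the drop in the trained model's score.
# This mirrors the spirit of LIME/SHAP (explaining an already-trained black
# box) in a few lines, on synthetic data with made-up feature indices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```

Because it only needs predictions, not model internals, the same procedure applies unchanged to a DNN or any other black box.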

Researchers can also opt for hybrid approaches that combine the strengths of interpretable models and black-box models, achieving a balance between interpretability and predictive performance. These approaches leverage model-agnostic methods, such as LIME and surrogate models, to provide explanations without compromising the accuracy of the underlying complex model.

Moving ahead, AI interpretability will continue to evolve and play a pivotal role in shaping a responsible and trustworthy AI ecosystem.

The key to this evolution lies in the widespread adoption of model-agnostic explainability techniques (applied to any machine learning model, regardless of its underlying architecture) and the automation of the training and interpretability process. These advancements will empower users to understand and trust high-performing AI algorithms without requiring extensive technical expertise. However, at the same time, it will be equally critical to balance the benefits of automation with ethical considerations and human oversight.

Finally, as model training and interpretability become more automated, the role of machine learning experts may shift to other areas, like selecting the right models, implementing on-point feature engineering, and making informed decisions based on interpretability insights.

They'd still be around, just not for training or interpreting the models.

Shashank Agarwal is manager, decision science at CVS Health.


Continued here:

The secret to enterprise AI success: Make it understandable and ... - VentureBeat


How to Explain Bitcoin Ordinals to Your Grandmother – nft now

What are Bitcoin Ordinals?

If you've asked yourself this question in the past six months, you're not alone. The rapid rise of the Bitcoin Ordinals ecosystem has enthralled developers, divided the Bitcoin community, and reintroduced the original blockchain to a new generation of users.

In May, Bitcoin became the second-largest blockchain for NFTs with sales volume passing the likes of Solana and Polygon. Even Ethereum co-founder Vitalik Buterin has hailed Ordinals as an organic return of builder culture to Bitcoin.

Ninjalerts CEO Trevor Owens has been a fixture on the movement's frontlines, investing in Bitcoin projects as a GP for Bitcoin Frontier Fund and covering all things Ordinals as a co-host of nft now's Not Financial Advice Twitter Space every Tuesday and Thursday at 6:30 pm ET.

In an informative nft now podcast interview, Owens breaks down what Ordinals are, why they matter, and what he's building using the protocol.

Matt Medved: Everyone has a unique story about how they got into the space. What's yours?

Trevor Owens: I've been a founder for a decade and a developer in the web2 space. I became interested in Bitcoin in 2013, but I was more enthusiastic about blockchain technology than the coin itself. I'm very practical, and my specialty is go-to-market strategies. I spent a decade coaching founders in the lean startup methodology. I worked with Eric Ries; you'll see my name in the thank-you section of his book.

The space took a long time to mature, and it wasn't until 2021 that I noticed a tipping point, where I saw Metamask, OpenSea, and Uniswap. At this point, I recognized that the technology I always thought was promising was here in full force.

I'd known the founders of Stacks for a long time, being one of their first advisors. When they launched Stacks 2.0, I connected with them. They wanted me to manage their ecosystem venture fund. So, I raised a small fund of four million. Recently, we closed a second round, raising about six to seven million. To date, we've made 50 investments across the Bitcoin ecosystem.

I'm also an angel investor in a bunch of different startups, like nft now and others on the ETH side as well. I've spent a lot of time working with zero-to-one companies, and now I do that professionally at the Bitcoin Frontier Fund. Our focus has expanded from Stacks to Ordinals more broadly.

I'm also the CEO of Ninjalerts, an Ethereum analytics trading tool for NFT users that gives you the best push notifications and alerts when things happen on the blockchain. I'm living the dual life of a builder and an investor.

Let's jump right into Ordinals. It's been amazing to watch the growth of that ecosystem in such a short amount of time. People are still trying to wrap their heads around it. How would you explain Bitcoin Ordinals to your grandmother?

I would describe it to my grandmother as a way to permanently record data in any form, and also a way to create a certificate of authenticity, similar to NFTs, for that data, to track ownership on Bitcoin.

The really innovative thing about Ordinals is that all data is on-chain, and it's much less expensive than having a smart contract. That's important for a chain like Bitcoin, where block space is limited and expensive.


It's a way to record any type of media, and you can also build new kinds of Web3 applications that are different from how Ethereum applications are structured. That leads to some interesting use cases and ergonomics that actually make decentralized coordination a little bit easier.

We're seeing things that people tried to do on Ethereum that I think are objectively good ideas but were limited by the structure of having smart contracts and a Solidity EVM. By just inscribing the raw data, it's a little bit easier, and that's the fascinating thing.

It's been interesting to see the various types of inscriptions, whether it's a photo, music, or something else, directly on Bitcoin, which wasn't possible before. How did we get here, and why do you think this is so transformative?

We got here kind of by accident. In 2017, SegWit was introduced, an upgrade to Bitcoin that created something called the segregated witness. A normal Bitcoin block has something like a one-megabyte limit, but in the witness area of the transaction, you can fit up to an additional three megabytes. So, in total, you can have four megabytes per block.

And in addition, that witness area is actually discounted by 75 percent for fee purposes. Witness bytes count for only a quarter of the weight of regular transaction bytes, which is how roughly four megabytes of witness data can fit in a block that otherwise holds only one megabyte. So, the data is actually stored differently than the rest of the transaction.
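The accounting being described can be sketched in a few lines. This is a simplified model of SegWit's weight rules, not Bitcoin's actual implementation, and the byte counts in the example are made up for illustration:

```python
# Simplified SegWit weight accounting: non-witness bytes count 4 weight units
# each, witness bytes count 1 (the "75 percent discount"), and a block may
# hold at most 4,000,000 weight units.
MAX_BLOCK_WEIGHT = 4_000_000

def tx_weight(non_witness_bytes: int, witness_bytes: int) -> int:
    """Weight units: witness data is discounted 4x relative to base data."""
    return 4 * non_witness_bytes + witness_bytes

# A transaction that is almost all witness data (like an inscription) can
# approach 4 MB of raw data while staying under the block weight limit:
inscription = tx_weight(non_witness_bytes=300, witness_bytes=3_990_000)
print(inscription, inscription <= MAX_BLOCK_WEIGHT)
```

Note that one megabyte of ordinary transaction data and four megabytes of witness data both come out to the full 4,000,000 weight units, which is the equivalence the interview is gesturing at.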

That kind of opened up this avenue for really inexpensive permanent data storage. If you do the math on how much it would cost you to have a Dropbox account for life or run an Amazon Web Services server for life, there's no way you can get data storage this cheap in a permanent way.

An amazing Bitcoin dev named Casey Rodarmor was a contributor to Bitcoin Core over the years, and he's considered one of the top Bitcoin developers in the world. He was working on this project for two years. Casey is a very unique individual. He was born from the Bitcoin community and fits in with some of the laser-eye maximalists we like to dunk on.

He was always an instigator and liked to break the rules, and he had development skills and a deep understanding of how Bitcoin works. He spent two years working on Ordinals and Inscriptions. Ordinals refers to ordinal theory, which is the way that we actually turn Bitcoin, or pieces of Bitcoin (satoshis), into non-fungible tokens that are trackable and tradable throughout the blockchain.
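The numbering scheme at the heart of ordinal theory can be sketched as follows. This is an assumed simplification, not the real ord implementation: satoshis are numbered in the order they are mined, with the 50 BTC block subsidy halving every 210,000 blocks:

```python
# Sketch of ordinal theory's numbering: every satoshi gets a serial number in
# the order it was mined. A block's first new sat is the running total of all
# earlier block subsidies (50 BTC, halving every 210,000 blocks).
SATS_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000

def subsidy(height: int) -> int:
    """Block subsidy in sats at a given block height."""
    return (50 * SATS_PER_BTC) >> (height // HALVING_INTERVAL)

def first_sat_of_block(height: int) -> int:
    """Ordinal number of the first new satoshi mined in this block."""
    total, h = 0, 0
    while h < height:
        # sum whole halving epochs at once instead of block-by-block
        epoch_end = min(height, (h // HALVING_INTERVAL + 1) * HALVING_INTERVAL)
        total += subsidy(h) * (epoch_end - h)
        h = epoch_end
    return total

print(first_sat_of_block(0))        # 0: the genesis block's first sat
print(first_sat_of_block(210_000))  # first sat after the first halving epoch
```

Because each sat keeps its number as it moves between transactions, an inscription attached to a specific sat can be tracked and traded, which is what makes the NFT-like behavior possible.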


On January 20 this year, it just took off like a rocket. It was also perfect timing. We were just coming off of FTX and that cascade, and we were at a low in the market. It was something that violated all the narratives in the space.

Then you had Udi Wertheimer dropping a four-megabyte Bitcoin block, which was the largest Bitcoin block ever done. I think the real reason it succeeded with the Bitcoin community, where other innovations have failed or been attacked by the immune system, so to speak, of the Bitcoin Maxi community, is that Ordinals are not only a cool use case for a lot of users but also increased the fees on Bitcoin.

When you have two people fighting for moral authority in Bitcoin, whoever pays is in the position of superior moral authority. The idea is, if you don't like it, you should pay more. Paying directly to the miners contributes to the security of Bitcoin. Eventually, the security subsidy on Bitcoin is going to be gone, and it will need to survive on the security budget generated by fees.

But historically, fees have been so low on Bitcoin that it's been an open question: How are we going to make sure Bitcoin is viable in the long term, with people paying enough to support the security and decentralization of the network? Ordinals kind of came in and provided that answer.

While many of the Bitcoin Maxis tried to cancel Ordinals, it only made it stronger. We have had hundreds of developers pour into the space. We've had a ton of infrastructure, like marketplaces, wallets, explorers, and other tools, developed that just didn't exist four months ago.

I think whenever there's a new and exciting ecosystem, there's a perceived barrier to entry. How can people get started with Bitcoin Ordinals?

The first step would be to get a Bitcoin web wallet similar to Metamask. The most popular one right now is Xverse; I'm an investor. You can also check out Hiro, which is an investor in my fund, and UniSat. Those are the top three.

They offer a few different things, but they're pretty similar. And then, there are multiple marketplaces: Magic Eden, Gamma, Ordinals Wallet, and Ordswap. Then go and get an Ordinal, or inscribe one. One of the coolest things about Ordinals is that you don't need to launch a smart contract to make an Ordinal.

The whole system is easier than any NFT marketplace storefront. Some of the wallets have their own inscription service embedded in them, and you can just drag and drop a file, like a JPEG or an MP3, pay a Bitcoin fee, and boom, it's forever recorded on Bitcoin. As long as Bitcoin survives, it's going to be there.

There are already some quite notable Ordinals projects that have launched. In your view, what are some that the average Web3 lover should have on their radar?

One of the first notable ones, I think, was Bitcoin DeGods. That was a very successful mint, where what was originally a popular community on Solana that had moved to Ethereum and Polygon inscribed a percentage of their art onto Bitcoin.

There are a lot of projects under the 1,000 mark; the earlier the Ordinal, the more attractive it is. There's Ord Rocks, which were actually like Ether Rocks. There's Ord Punks, which used some of the traits of CryptoPunks, plus some other traits as well, to make a bunch of CryptoPunk-like PFPs in the sub-1,000 range.


I have a colleague who has an Ordinals collection called Diamond Fingers, which is sub-10,000, which are these 3D middle fingers that say, You can't censor my transactions, and NFTs are fun. There's Ordinal Maxi Biz, which is one of the most recently successful and hyped projects. There's On-Chain Monkey, which is another really good one.

And, of course, we're doing our own project here for Ninjalerts, which is 1,500 ninja PFPs inscribed on rare pizza sats. So, there's a lot going on. The pace of projects is not like Ethereum, though. You have a higher barrier to entry because, most of the time, the cost of putting Ordinals on the blockchain is borne by the project creator.

On Ethereum, usually you just launch the smart contract, and that costs around an ETH to do. But then all of the people who mint pay the cost of gas to mint the NFT. On Ordinals, it's the opposite: the creator has to do all of the inscribing. So you have a higher barrier to entry, which I think means that you see a lot fewer low-effort projects launched on Ordinals.

Pizza Ninja is the project that you're launching with Ninjalerts. It's built on rare Satoshis from the Bitcoin Pizza transaction. Tell us a little bit about the project.

I think the cool thing about rare sats is that they allow you to do more creative storytelling. Project creators are now able to combine the medium with which they're doing their art, the Satoshi, with the art or the story of the art that they're trying to tell.

Pizza sats are not as rare as the uncommon or rare sats. However, it's a very historic transaction. It goes back to the first verified Bitcoin transaction ever used to buy anything, which was also the first physical item. A guy named Laszlo paid somebody 10,000 Bitcoin for two Papa John's pizzas.

It's like a holiday every year in the Bitcoin community, the day when that transaction happened. And for people who want to get rare sats, it's a really good entry rare sat to get. I love pizza. If you go back to the history of civilization, a lot of historians say bread is the most important invention in human history in terms of how it formed the basis of the societal construction of families, breaking bread for reconciliation and peace.

So, with an NFT project, I think the most important thing you want to do is try to create meaningful experiences that can bond your community together, having a theme to tie people together. I think pizza is universally loved across all cultures. I was thinking about Ninjalerts and pizza sats. And then, of course, the next thing you think of is Ninja Turtles.

So, we just went on this path to try and create a unique project with some amazing art that could bring together our community and also the launch of Ninjalerts on Ordinals on Bitcoin. On Ninjalerts, we provide some of the best NFT analytics and alerts for Ethereum. Of course, we want to move into Ordinals. So, we decided to do this project to help curate our community and help get the word out.

We've been asking people to make videos on Twitter where they record themselves eating pizza while wearing a Ninja costume. This is a hat tip to Udi Wertheimer, who did this with Taproot Wizards.

We're getting videos every single day. It's just been a super fun experience. The way you distribute the opportunity to participate to those early holders is one of the most important decisions that you make. Being able to select the people who care the most, I think, is going to be the most important sign of long-term health for a project.

This interview transcript has been edited for concision and clarity.

For the full and uncut interview, listen to our podcast episode with Trevor Owens.

Read more here:

How to Explain Bitcoin Ordinals to Your Grandmother - nft now


Ethereum scaling protocols drive zero-knowledge proof use in 2023 – FXStreet

Ethereum scaling protocols dominate the use of zero-knowledge rollups (ZK-rollups), with major launches, new research and healthy competition key points in a sector report published by ZKValidator.

The node infrastructure operator's State of ZK Q2 report reflects on significant events across the ZK ecosystem, with notable launches of ZK-powered layer 2s highlighting the use of the technology for scaling in comparison with other market segments.

ZK-proofs use cryptography to allow one party to prove to another party that certain information is correct without revealing any data. They present a solution to trust and privacy in digital environments, and have been pivotal in scaling layer-1 blockchain protocols.
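For a flavor of how such a proof works, here is a toy sketch of a Schnorr-style sigma protocol, one of the classic building blocks behind zero-knowledge systems. The tiny group parameters (p=23, g=2, subgroup order q=11) are purely illustrative and nowhere near cryptographically secure:

```python
import random

# Toy Schnorr identification protocol: the prover convinces the verifier
# that it knows a secret x with y = g^x (mod p) without ever revealing x.
p, q, g = 23, 11, 2  # g generates a subgroup of prime order q modulo p

def schnorr_round(x: int) -> bool:
    """Run one interactive round; return True if the verifier accepts."""
    y = pow(g, x, p)              # public key derived from the secret
    r = random.randrange(1, q)    # prover's random nonce
    t = pow(g, r, p)              # commitment sent to the verifier
    c = random.randrange(1, q)    # verifier's random challenge
    s = (r + c * x) % q           # response: reveals nothing about x alone
    # Verifier's check: g^s == t * y^c (mod p)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = random.randrange(1, q)
print("proof accepted:", schnorr_round(secret))
```

Production ZK-rollups use far more sophisticated non-interactive proof systems (SNARKs and STARKs), but the underlying contract is the same: convince a verifier a statement holds without disclosing the witness.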

A poll put to the 18,000 followers of the Zero Knowledge Podcast provided a sample for a community gauge on the most important applications of ZK-proofs. Of the respondents, 42% selected ZK for Scaling as the segment that would experience the highest growth in the coming 12 months.

ZK use cases. Source: State of ZK Report Q2 2023.

ZK for privacy and identity followed at 23.8% and 22.2%, respectively, while zero-knowledge machine learning was the least selected segment.

The poll reflects the real-world applications of ZK-proofs, with several major Ethereum scaling protocols hitting mainnet in the first half of 2023.

This includes Polygon's announcement of its forthcoming 2.0 multichain system, which will use ZK-proofs to transfer assets between networks, and zkSync Era's ZK Stack, which will allow developers to build ZK-rollups and proprietary layer-3 hyperchains.

ConsenSys also began onboarding partners to its mainnet Linea network on July 11. Linea is another prominent layer 2 that allows developers to build or migrate decentralized applications for Ethereum.

The sector has also attracted investment across several segments, including new zk-based layer 2s and zk Ethereum Virtual Machines (zkEVMs), as well as ZK-proofs for off-chain computation.

Ethereum scaling firm StarkWare, which helped pioneer zk technology, and Ethereum co-founder Vitalik Buterin invested in Kakarot, a zkEVM running on StarkWare's technology that aims to support layer-3 applications.

The report also highlights a $115 million raise by Worldcoin to continue the development of its zk-powered digital identity application and ecosystem. Swiss nonprofit Anoma Foundation also features in the report after its latest $25 million raise to continue building its third-generation blockchain architecture for decentralized applications.

Read this article:

Ethereum scaling protocols drive zero-knowledge proof use in 2023 - FXStreet
