
Key Cryptocurrencies to Keep an Eye on During a Bull Market: Featuring Litecoin and Rebel Satoshi – Finbold – Finance in Bold

Press Releases are sponsored content and not a part of Finbold's editorial content. If you encounter any issues, kindly report them to [emailprotected]. Crypto assets/products can be highly risky. Never invest unless you're prepared to lose all the money you invest.

Although Bitcoin remains more popular than its successor Litecoin (LTC), the latter could easily produce a better bull market. Rebel Satoshi ($RBLZ), an emerging meme coin, is another promising contender.

Thanks to the project's unique rebel-themed goals, investors see it as a top crypto to invest in. Let's study the progress of each name in more detail, starting with Litecoin.

While Litecoin isn't dominating the news like Bitcoin and other top altcoins, the numbers suggest a project bubbling under. Litecoin's blockchain mined the 74 millionth LTC on December 19, 2023, leaving 10 million LTC before the 84 million total supply cap is hit. However, with quadrennial halving events (most recently in August 2023), the issuance of new LTC will keep slowing.
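As a back-of-the-envelope check on those figures, Litecoin's issuance schedule can be sketched in a few lines of Python. This is a rough sketch only: the 84 million cap, 50 LTC initial reward, and 840,000-block halving interval are Litecoin's published protocol parameters, and the code ignores the small rounding the real client applies.

    # Rough sketch of Litecoin issuance under its halving schedule.
    MAX_SUPPLY = 84_000_000      # protocol cap, in LTC
    HALVING_INTERVAL = 840_000   # blocks between reward halvings (~4 years)
    INITIAL_REWARD = 50.0        # LTC per block at launch

    def cumulative_supply(blocks_mined):
        # Total LTC issued after a given number of blocks.
        supply, reward, remaining = 0.0, INITIAL_REWARD, blocks_mined
        while remaining > 0:
            step = min(remaining, HALVING_INTERVAL)
            supply += step * reward
            reward /= 2
            remaining -= step
        return supply

    # The August 2023 halving was Litecoin's third, i.e. 2,520,000 blocks:
    print(cumulative_supply(3 * HALVING_INTERVAL))               # 73,500,000 LTC, close to the 74M milestone
    print(MAX_SUPPLY - cumulative_supply(3 * HALVING_INTERVAL))  # ~10,500,000 LTC still to mine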

Litecoin hit the 196 millionth transaction on the same day, averaging about one million daily. This metric exemplifies the project's unwavering adoption despite receiving less media attention.

Earlier this year, the introduction of Bitcoin Ordinals, unique NFT inscriptions, also inspired Litecoin to have its own. Data suggests that Litecoin has inscribed over 10 million Ordinals since February 2023.

However, LTC has yo-yoed on the charts, ranging between $70 and $79 despite this progress. Still, forecasts suggest the price may reach the $80–$150 range in 2024.

So, why is Rebel Satoshi the best cryptocurrency to buy today?

Rebel Satoshi is the first meme token destined to correct the unfair financial system by invoking the spirits of Guy Fawkes and Satoshi Nakamoto. It's a project for rebels, by rebels, that plans to hit a feasible market cap of $100 million.

Simultaneously, Rebel Satoshi will tap into the familiar light side of meme coins while building a like-minded community. Activities in this regard involve interactive quests and virtual gatherings. There is also the Rebel Meme Hall of Fame, an exclusive space for users to garner laughter and popularity with their best community-voted memes.

The key to joining this environment is to own $RBLZ. It's a necessary utility token for the rewarding side of Rebel Satoshi, beginning with the Rebel Artefacts Vault. Members will explore a treasure chest of 9,999 rebellion-themed NFT collectibles and digital artworks. You can also earn passive returns by locking up your $RBLZ while strengthening the network.

Like Litecoin, $RBLZ proudly has a capped supply (in this case, 250 million). It's also deflationary, as the developers will burn any unsold tokens after the presale in a symbolic nod to Fawkes.

The presale is close to reaching the fourth round, having sold nearly 70 million tokens. With $RBLZ priced at $0.018, buyers can earn extra tokens by taking advantage of a 20% deposit bonus on Rebel Satoshi's website.

The presale's aftermath will bring further exciting developments for Rebel Satoshi, like the release of the first NFT collection, a community rewards program, and more. Current buyers should see a 38% return by this time, as $RBLZ will be worth $0.025! For the latest updates and more information, be sure to visit the official Rebel Satoshi Presale Website or contact Rebel Red via Telegram.

Continue reading here:

Key Cryptocurrencies to Keep an Eye on During a Bull Market: Featuring Litecoin and Rebel Satoshi - Finbold - Finance in Bold

Read More..

2024 is the year for enterprise growth and Satoshi’s legacy: Calvin Ayre – CoinGeek

The year 2024 will be pivotal to enterprise blockchain adoption, says Ayre Group and CoinGeek Founder Calvin Ayre.

In his customary end-of-year message to the blockchain world, Ayre wishes everyone a happy, healthy and prosperous year. BSV blockchain, he says, will continue to prove its worth as the only blockchain worthy of large-scale enterprise use, and the only one built to handle the demands of Web3 applications, artificial intelligence (AI), and users' best interests with its high-speed capacity and low prices.

Ayre kicks off his message with some of BSV blockchain's impressive stats for 2023, in which it continued to push the envelope in terms of what's possible when you build applications on a truly scalable blockchain. After taking 14 years to process its first 2 billion transactions, BSV blockchain had already processed another billion in 2023 by September, including over 128 million in a single day.

"This is irrefutable evidence of BSV's capacity for growth. And we're only getting started," he said.

The statistics will only become more impressive with the launch of Teranode, BSV blockchain's rewrite of the Bitcoin protocol software (which still follows all of Satoshi Nakamoto's original rules). A six-month sustained test will begin in January 2024. Teranode is built to handle well over a million transactions per second (that's right, per second), and coupled with BSV's ultra-low fee structure, it will enable solutions to a host of heretofore unworkable problems.
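For a sense of scale, a one-line calculation using only the figures quoted above shows what sustained million-per-second throughput would mean against that 128-million-transaction day:

    # 1,000,000 tx/s sustained for one day vs. the 128M single-day record cited above.
    daily_capacity = 1_000_000 * 24 * 60 * 60    # 86,400,000,000 transactions
    print(daily_capacity / 128_000_000)          # ~675x the 2023 record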

BSV blockchain is built for Web3 and AI

Ayre calls out Web3 and artificial intelligence (AI) applications as key opportunities for the BSV blockchain to grow and shine. Although long-promised as world changers, these two fields have often underwhelmed in reality. Web3 promotes a user-centric model of the worldwide web, using blockchain technology to return control over data to its creators. "But this has so far been a buzzword with no real-world impact," he says. This is mainly due to people trying to build Web3 solutions on blockchains that can't scale.

Web3 requires vast numbers of unique IP addresses to function well, and only scalable enterprise blockchain technology like BSV, together with machine learning and IPv6, offers the formula that will finally make Web3 a reality.

Scalability and low fees are also vital to building reliable artificial intelligence networks. BSV will lower the barrier for generating the tokens that large language models (LLMs) require to process and generate language.

There's no denying that AI was a hot chick at the technology bar in 2023. But AI embarrassed nearly as often as it impressed, thanks to its tendency towards hallucinations. "Large language models need to train on immutable blockchain-based data to ensure accuracy," Ayre said.

COPA, Satoshi's legacy, and the tipping point

Ayre also expresses his confidence that Bitcoin creator Dr. Craig S. Wright will prevail in his legal defense against the anti-competitive Silicon Valley cabal COPA. The so-called open patent alliance (which is anything but that) has sought to nullify Dr. Wright's vast intellectual property library by repudiating his past as the pseudonymous Satoshi Nakamoto, a fight Ayre describes as a David vs. Goliath scenario.

COPA may deny Dr. Wright's identity, but they can't challenge the body of already-patented work he's amassed. Ayre believes Dr. Wright will win and wishes him well in the trial (due to begin early in 2024), but whatever the outcome, the cabal's members will be dealing with Dr. Wright's legacy for decades to come.

He predicts the trial will signify a tipping point, one where the world begins to take a closer look at the creations that have come out of Dr. Wright's brain.

Digital technology is remorseless in its desire to expand its reach, both in terms of processing power and bandwidth. BSV is equally remorseless in its ability to expand to meet the needs of enterprises, not just today but far into the future.

Mark my words: BSV is building the strong and secure rails on which the world's data will travel.

Next May, the enterprise blockchain world will gather again in the United Kingdom for the London Blockchain Conference 2024, for three jam-packed days of educational presentations and vigorous but healthy debate. CoinGeek looks forward to seeing everyone there in person.


Read the rest here:

2024 is the year for enterprise growth and Satoshi's legacy: Calvin Ayre - CoinGeek

Read More..

Looking Beyond Bitcoin and Dogecoin: Top Cryptocurrency Picks for 2023 – Finbold – Finance in Bold

Press Releases are sponsored content and not a part of Finbold's editorial content. If you encounter any issues, kindly report them to [emailprotected]. Crypto assets/products can be highly risky. Never invest unless you're prepared to lose all the money you invest.

While top altcoins like Dogecoin (DOGE) experience double-digit drops, Bitcoin (BTC) is near its yearly high. A fresh peak may come in early January 2024 as investors await the SEC's (Securities and Exchange Commission) approval of a Bitcoin exchange-traded fund (ETF). Dogecoin will also be the center of attention soon, with two moon-bound missions.

Yet, as 2023 is almost over, other investors have eyed a new soon-to-be viral meme coin. Stay tuned to learn more while we observe the latest developments for Bitcoin and Dogecoin.

Bitcoin investors have surely marked January 10, 2024, on their calendars. The stern US regulator has to accept or deny applications from financial institutions like ARK Investment, 21Shares, BlackRock, and others by this date. More than ten companies, the most recent being Tidal Investments, have applied for these ETFs.

The decisive January date has been at the forefront of Bitcoin enthusiasts' minds since the end of September 2023. Fortunately, the consensus remains that Bitcoin should get this much-needed regulatory blessing.

The ETF buzz has contributed significantly to the rise in BTC since mid-October 2023, with the price increasing 61% from $26,540 to $42,780. Forecasts expect the value to surpass the yearly high of $44,747 by the end of 2023. Furthermore, BTC may trade in a minimum range of $45,000–$50,000 in 2024.
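That 61% figure checks out directly from the two quoted prices:

    # Percent change between the cited mid-October and December prices.
    low, high = 26_540, 42_780
    print(round((high - low) / low * 100, 1))  # 61.2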

While DOGE is down 17% from $0.10 to $0.09, the meme coin's network metrics suggest a potential recovery.

The latest data from blockchain analytics provider Santiment shows a bullish divergence between Dogecoins active addresses and transaction volume. Santiment also notes an uptrend in retail addresses scooping up DOGE from December 1-18, 2023.

The renewed interest is likely because of the pending Doge-1 mission set to launch on December 23, 2023. American space robotics company Astrobotic is scheduled to send a physical DOGE token to the moon. Interestingly, this is one of two space-related missions that should be positive price catalysts. Estimates indicate a minimum trading range for DOGE of $0.10–$0.30 for 2024.

Finally, let's look at the project briefly referenced earlier, Rebel Satoshi ($RBLZ), and why it's a top cryptocurrency to buy.

Dogecoin took inspiration from top crypto coins like Bitcoin, which in turn was kickstarted by Satoshi Nakamoto. The latter is the namesake of Rebel Satoshi, a meme token representing the developers' desire for decentralized finance.

Rebel Satoshi is building a rebel army to challenge the financial status quo in honor of Nakamoto. Guy Fawkes is another motivation, representing social justice by transferring wealth from oppressive elites to the ordinary citizen.

Rebel Satoshi also celebrates rebel meme culture with additions like the Rebel Meme Hall of Fame. Users can have their rebel-themed memes recognized in a community gallery, garnering popularity and laughter from fellow rebels. Interactive quests and virtual gatherings will also uphold the fun while galvanizing Rebel Satoshis community.

Owning the $RBLZ token is how you become a participant. Impressive returns await holders who can stake this coin. They can also venture into the vibrant Rebel Artefacts Vault, an NFT marketplace for trading 9,999 symbolic collectibles and digital art characters.

The $RBLZ presale is about six weeks old and has sold over 67 million tokens. You can own $RBLZ for $0.018 now, a price set to surge 38% to $0.025 after the presale.

For the latest updates and more information, be sure to visit the official Rebel Satoshi Presale Website or contact Rebel Red via Telegram.

Read the original post:

Looking Beyond Bitcoin and Dogecoin: Top Cryptocurrency Picks for 2023 - Finbold - Finance in Bold

Read More..

Bitcoin Price Prediction: BTC Dips Amid Market Moves and Satoshi Identity Revelations – Cryptonews

In the dynamic world of cryptocurrency, Bitcoin (BTC) has experienced a minor dip, trading at $43,623 with a 0.83% decrease on Saturday. This shift in Bitcoin's price coincides with a rise in stocks and a decline in the dollar's value as the long holiday weekend approaches.

Meanwhile, in a pivotal legal case, a judge has ordered Craig Wright to pay over $1 million, acknowledging new evidence pertaining to the enigmatic identity of Satoshi Nakamoto, the pseudonymous creator of Bitcoin.

In parallel developments, as meetings with the SEC continue, Hashdex has selected BitGo as the custodian for its anticipated Bitcoin ETF, indicating further institutional moves in the Bitcoin ecosystem.

Global stock indexes mostly rose on Friday, while the US dollar fell to a near five-month low, driven by weaker-than-expected US inflation data.

The Commerce Department reported that US prices decreased in November, marking the first decline in over 3.5 years and bringing the annual inflation rate below 3%.

This spurred investor optimism, as the data suggested a potential Federal Reserve interest rate reduction in the coming year.

The S&P 500 approached its all-time high, signaling an extended bull market. While the Dow Jones experienced a slight dip, both the Nasdaq and S&P 500 registered gains for the seventh consecutive week.

The dollar index dropped to 101.7, reflecting a 2% decline from the previous year. In the cryptocurrency market, Bitcoin saw a minor drop to $43,623, slightly below its eight-month peak, influenced by broader market trends and the dollar's decline.

The ongoing legal saga involving Craig Wright and Bitcoin Core developers, coupled with new evidence submissions, is creating a stir in the Bitcoin market.

The postponement of Wright's trial to February 5, 2024, and the judge's decision to admit additional evidence have been met with investor enthusiasm and optimism.

Notably, Wright's being ordered to pay over $1 million in legal fees is seen as a significant development. This trial is closely watched for potential revelations about Bitcoin's origins and ownership, leading many to anticipate a positive impact on Bitcoin's market value.

Investors are viewing these developments as bullish signals, contributing to a more optimistic forecast for Bitcoin prices.

In a significant move, Hashdex has updated its Bitcoin exchange-traded fund (ETF) proposal, naming BitGo as the custodian for what was formerly dubbed its Bitcoin Futures ETF. This decision places BitGo ahead of other contenders like Coinbase and Gemini.

The SEC's recent meetings with major industry players, including BlackRock and Fidelity, have set a December 29 deadline for finalizing ETF proposals, hinting at a possible approval for spot Bitcoin ETFs.

The anticipation surrounding the SEC's decision, particularly with the Ark 21Shares deadline set for January 10, 2024, is generating high expectations.

This positive momentum in the ETF landscape is likely to reflect favorably on Bitcoins market performance, adding to the bullish sentiment in the crypto space.


Disclaimer: Cryptocurrency projects endorsed in this article are not the financial advice of the publishing author or publication. Cryptocurrencies are highly volatile investments with considerable risk; always do your own research.


Excerpt from:

Bitcoin Price Prediction: BTC Dips Amid Market Moves and Satoshi Identity Revelations - Cryptonews

Read More..

From GPT-5 to AGI; Sam Altman reveals the most commonly requested features from ChatGPT maker in 2024 | Mint – Mint

OpenAI CEO Sam Altman has listed the most requested features from the ChatGPT maker for 2024. The list includes many notable mentions: artificial general intelligence, the GPT-5 language model, more personalisation, better GPTs and more.

The suggestions were in response to a question posed by Altman on X (formerly Twitter), where he asked his followers what they would like OpenAI to build or fix in 2024.

"will keep reading, and we will deliver on as much as we can (and plenty of other stuff we are excited about and not mentioned here)," the OpenAI CEO promised in an ensuing post on X.

While listing the most requested features of OpenAI, Altman added a caveat about AGI, noting that users will have to be patient and implying that an AI model from the company that reaches the level of AGI in 2024 remains highly unlikely.

Speaking to Time magazine earlier this month, Altman shed light on the limitless potential of the new technology. He said: "I think AGI will be the most powerful technology humanity has yet invented... If you think about the cost of intelligence and the quality of intelligence, the cost falling, the quality increasing by a lot, and what people can do with that,"

"It's a very different world. It's the world that sci-fi has promised us for a long time... and for the first time, I think we could start to see what that's gonna look like," the 38-year-old added.

OpenAI announced its GPT-4 Turbo language model at the company's first developer conference in November. The new language model has knowledge of world events up to April 2023 and was seen as a major upgrade over GPT-4, which was released in March.

Meanwhile, at the same event, OpenAI also announced that it would allow users to create their own Generative Pre-trained Transformers (GPTs) and share them publicly. The AI startup had said it would also launch a GPT store to help verified developers monetise their offerings. However, the drama surrounding Sam Altman's sacking and subsequent re-hiring at the AI firm has reportedly led to the GPT store's release being pushed back to 2024.


The rest is here:

From GPT-5 to AGI; Sam Altman reveals the most commonly requested features from ChatGPT maker in 2024 | Mint - Mint

Read More..

AI consciousness: scientists say we urgently need answers – Nature.com

A standard method to assess whether machines are conscious has not yet been devised. Credit: Peter Parks/AFP via Getty

Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists says that, at the moment, no one knows, and they are expressing concern about the lack of inquiry into the question.

In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use?

Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK, and one of the authors of the comments. Nor did US President Joe Biden's executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes.

"With everything that's going on in AI, inevitably there's going to be other adjacent areas of science which are going to need to catch up," Mason says. "Consciousness is one of them."

The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany.

It is unknown to science whether there are, or will ever be, conscious AI systems. Even knowing whether one has been developed would be a challenge, because researchers have yet to create scientifically validated methods to assess consciousness in machines, Mason says. Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress, says Robert Long, a philosopher at the Center for AI Safety, a non-profit research organization in San Francisco, California.


Such concerns are no longer just science fiction. Companies such as OpenAI, the firm that created the chatbot ChatGPT, are aiming to develop artificial general intelligence, a deep-learning system that's trained to perform a wide range of intellectual tasks similar to those humans can do. Some researchers predict that this will be possible in 5–20 years. Even so, the field of consciousness research is very undersupported, says Mason. He notes that, to his knowledge, there has not been a single grant offer in 2023 to study the topic.

The resulting information gap is outlined in the AMCS leaders' submission to the UN High-Level Advisory Body on Artificial Intelligence, which launched in October and is scheduled to release a report in mid-2024 on how the world should govern AI technology. The AMCS leaders' submission has not been publicly released, but the body confirmed to the authors that the group's comments will be part of its foundational material, the documents that inform its recommendations about global oversight of AI systems.

Understanding what could make AI conscious, the AMCS researchers say, is necessary to evaluate the implications of conscious AI systems for society, including their possible dangers. Humans would need to assess whether such systems share human values and interests; if not, they could pose a risk to people.

But humans should also consider the possible needs of conscious AI systems, the researchers say. Could such systems suffer? If we don't recognize that an AI system has become conscious, we might inflict pain on a conscious entity, Long says: "We don't really have a great track record of extending moral consideration to entities that don't look and act like us." Wrongly attributing consciousness would also be problematic, he says, because humans should not spend resources to protect systems that don't need protection.


Some of the questions raised by the AMCS comments to highlight the importance of the consciousness issue are legal: should a conscious AI system be held accountable for a deliberate act of wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes.

And then there is the need for scientists to educate others. As companies devise ever-more capable AI systems, the public will wonder whether such systems are conscious, and scientists need to know enough to offer guidance, Mason says.

Other consciousness researchers echo this concern. Philosopher Susan Schneider, the director of the Center for the Future Mind at Florida Atlantic University in Boca Raton, says that chatbots such as ChatGPT seem so human-like in their behaviour that people are justifiably confused by them. Without in-depth analysis from scientists, some people might jump to the conclusion that these systems are conscious, whereas other members of the public might dismiss or even ridicule concerns over AI consciousness.

To mitigate the risks, the AMCS comments call on governments and the private sector to fund more research on AI consciousness. It wouldn't take much funding to advance the field: despite the limited support to date, relevant work is already underway. For example, Long and 18 other researchers have developed a checklist of criteria to assess whether a system has a high chance of being conscious. The paper, published in the arXiv preprint repository in August and not yet peer reviewed, derives its criteria from six prominent theories explaining the biological basis of consciousness.

"There's lots of potential for progress," Mason says.

Original post:

AI consciousness: scientists say we urgently need answers - Nature.com

Read More..

Silicon Landlords: On the Narrowing of AI’s Horizon – The Nation


The one thing science fiction couldn't imagine is the world we have now: the near-complete control of Artificial Intelligence by a few corporations whose only goal is profit.

As HAL 9000, the true star of Stanley Kubrick's landmark film 2001: A Space Odyssey, died a silicon death by memory module removal, the machine, reduced to its infant state (the moment it became operational), recited:

"Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song..."

In HAL's fictional biography, written by Arthur C. Clarke for both the film's script and its novelization, HAL, a Heuristically programmed ALgorithmic computer, was theorized, engineered, and built at the University of Illinois's Coordinated Science Laboratory, where the real Illinois Automatic Computer (ILLIAC) supercomputers were built from the 1950s until the 1970s. Embedded within the idea of HAL is an assumption: that the artificial intelligence (AI) research programs of the mid-to-late 20th century, centered on universities, scientific inquiry (and yes, military imperatives), would continue uninterrupted into the future, eventually producing thinking machines that would be our partners.

Ironically for a work of the imagination, it turns out that what was unimaginable in the late 1960s for the makers of 2001 was the eventual near-complete control of the field of AI by a small group of North American corporations (the Silicon Valley triumvirate of Amazon, Microsoft, and Google) whose only goal, hyped claims and declarations of serving humanity aside, is profit. These companies claim to be producing HAL-esque machines (which, they suggest, exhibit signs of AGI: artificial general intelligence) but are actually producing narrow systems that enable the extraction of profit for these digital landlords while allowing them to maintain control over access to a technology they dominate and tirelessly work to insert into every aspect of life.

On December 5 of this year, MIT Technology Review published an article titled "Make no mistake – AI is owned by Big Tech," written by Amba Kak, Sarah Myers West, and Meredith Whittaker. The article, focused on the political economy and power relations of the AI industry, begins with this observation:

Put simply, in the context of the current paradigm of building larger- and larger-scale AI systems, there is no AI without Big Tech. With vanishingly few exceptions, every startup, new entrant, and even AI research lab is dependent on these firms. All rely on the computing infrastructure of Microsoft, Amazon, and Google to train their systems, and on those same firms' vast consumer market reach to deploy and sell their AI products.

"There is no AI without Big Tech." Before the era of Silicon Valley dominance, AI research programs were largely funded by a combination of government agencies such as DARPA and universities, and driven, at least at the level of researchers, by scientific inquiry (it was far from a utopia; Cold War imperatives were always a significant factor). In that world, the financing required to build the systems researchers used, and the direction of research itself, were subject to public scrutiny, at least as an option if not always in practice.

Today, the most celebrated and hyped methods, such as the resource-hungry Large Language Models (LLMs) behind ChatGPT, Google's recently released Gemini, and Amazon's Q, are the product of a concentration of capital and computational resources put into service to enhance the profit and market objectives of private entities such as Microsoft (the primary source of funding for OpenAI), completely beyond the reach of public scrutiny.

The Greek economist Yanis Varoufakis uses the term "technofeudalism" to describe what he sees as the tech industry's post-capitalist nature (closer in character to feudal lords, only this time with data centers rather than walled castles or robber barons). I have problems with his argument, but I will grant Varoufakis one key point: the industry's wealth and power are indeed built almost entirely on a rentier model that places the largest firms between us and the things we need. Rather than controlling land (although that too is a part of the story: data centers require lots of land), the industry controls access to our entertainments, our memories, and our means of communication.

To this list we can add the collection of algorithmic techniques called AI, promoted as essential and inevitable, owned and commanded by the cloud giants who have crowded out earlier research efforts with programs requiring staggering amounts of data, computational power, and resources. As 20th-century Marxists were fond of saying, it is no accident that the very methods that depend on techniques controlled at scale by the tech giants are the ones we are told we can't live without (how many times have you been told ChatGPT is the future, as inevitable as death and taxes?).

Continuing their analysis, the authors of the MIT article describe the power relationships, seldom discussed in most breathlessly adoring tech media accounts, that shape how the AI industry actually works:

Microsoft now has a seat on OpenAI's board, albeit a nonvoting one. But the true leverage that Big Tech holds in the AI landscape is the combination of its computing power, data, and vast market reach. In order to pursue its bigger-is-better approach to AI development, OpenAI made a deal. It exclusively licenses its GPT-4 system and all other OpenAI models to Microsoft in exchange for access to Microsoft's computing infrastructure.

For companies hoping to build base models, there is little alternative to working with either Microsoft, Google, or Amazon. And those at the center of AI are well aware of this

A visit to the Microsoft website for what it calls its Azure OpenAI Service (the implementation of OpenAI's platform via Microsoft's Azure cloud computing service) shows the truth of the statement "There is little alternative to working with either Microsoft, Google, or Amazon." Computing hardware for AI research costs oceans of money (Microsoft's $10 billion investment in OpenAI is an example) and demands constant maintenance, things smaller firms can scarcely afford. By offering a means through which start-ups and, really, all but the deepest-pocketed organizations can get access to what are considered cutting-edge methods, Microsoft and its fellow travelers have become the center of the AI ecosystem. The AI in your school, hospital, or police force (the list goes on) can, like roads leading to Rome, be traced back to Microsoft et al.

In the fictional world of HAL 9000, thinking machines, built at universities, watched over by scientists and engineers, and disconnected from profit incentives, emerged onto the world stage, becoming a part of life, even accompanying us to the stars. In our world, now 22 years past the 2001 imagined in the film, a small and unregulated group of corporations steers the direction of research, owns the computers used for that research, and sells the results as products the world can't do without. These products (generative AI image generators like Dall-E, text calculators like ChatGPT, and a host of other systems, all derivative) are being pushed into the world, not as partners, as with the fabled HAL, but as profit vectors.


Power, like life itself, is not eternal. The power of the tech industry, facilitated by the purposeful neglect of unconcerned, mis- or poorly informed governments and modern laissez-faire policies, is not beyond challenge. There are groups, such as the Distributed AI Research Institute, and even legislation, like the flawed EU AI Act, that offer glimpses of a different approach.

To borrow from linguistics professor Emily Bender, we must resist the urge to be impressed and focus our thoughts and efforts instead on ensuring that the tech industry and the AI systems it sells are firmly brought under democratic control.

The alternative is a chaotic dystopia in which we're all at the mercy of the profit-driven whims of a few companies. This isn't a future anyone deserves. Not even Elon Musk's (dwindling) army of reality-challenged fans.


Dwayne Monroe is a cloud architect, Marxist tech analyst, and Internet polemicist based in Amsterdam. He is currently writing a book, Attack Mannequins, exploring the use of AI as propaganda.


Visit link:

Silicon Landlords: On the Narrowing of AI's Horizon - The Nation

Read More..

The Era of AI: 2023’s Landmark Year – CMSWire


As we approach the end of another year, it's becoming increasingly clear that we are navigating through the burgeoning era of AI, a time that is reminiscent of the early days of the internet, yet poised with a transformative potential far beyond. While we might still be at what could be called the "AOL stages" of AI development, the pace of progress has been relentless, with new applications and capabilities emerging daily, reshaping every facet of our lives and businesses.

In a manner once attributed to divine influence and later to the internet itself, AI has become a pervasive force: it touches everything it changes and, indeed, changes everything it touches. This article will recap the events that impacted the world of AI in 2023, including the evolution and growth of AI; regulations, legislation and petitions; the saga of Sam Altman; and the pursuit of Artificial General Intelligence (AGI).

The latest chapter in the saga of AI began late last year, on Nov. 30, 2022, when OpenAI announced the release of ChatGPT 3.5, the second major release of the GPT language model capable of generating human-like text, which signified a major step in improving how we communicate with machines. Since then, it's been a very busy year for AI, and there has rarely been a week that hasn't seen some announcement relating to it.

The first half of 2023 was marked by a series of significant developments in the field of AI, reflecting the rapid pace of innovation and its growing impact across various sectors. So far, the rest of the year hasn't shown any signs of slowing down. In fact, the emergence of AI applications across industries seems to have increased in pace. Here is an abbreviated timeline of the major AI news of the year:

February 13, 2023: Stanford scholars developed DetectGPT, the first in a forthcoming line of tools designed to differentiate between human and AI-generated text, addressing the need for oversight in an era where discerning the source of information is crucial. The tool came after the release of ChatGPT 3.5 prompted teachers and professors to become alarmed at the potential of ChatGPT to be used for cheating.
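The paper's core test is simple to state: model-generated text tends to sit near a local maximum of the model's log-probability, so lightly perturbed rewrites score noticeably lower, while human-written text shows no consistent drop. A minimal sketch of that comparison, with log_prob and perturb as hypothetical stand-ins for the scoring model and the mask-filling paraphraser the paper uses:

    # Sketch of DetectGPT-style perturbation testing (hypothetical helpers:
    # log_prob(text) scores text under the suspect model; perturb(text)
    # returns a lightly rewritten variant, e.g. via T5 mask-filling).
    def detection_score(text, log_prob, perturb, n=20):
        original = log_prob(text)
        perturbed = [log_prob(perturb(text)) for _ in range(n)]
        # A large positive gap means the text sits at a probability peak,
        # which the paper associates with model-generated text.
        return original - sum(perturbed) / len(perturbed)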

February 23, 2023: The launch of an open-source project called AgentGPT, which runs in a browser and uses OpenAI's ChatGPT to execute complex tasks, further demonstrated the versatility and practical applications of AI.

February 24, 2023: Meta, formerly known as Facebook, launched Llama, a large language model with 65 billion parameters, setting new benchmarks in the AI industry.

March 14, 2023: OpenAI released GPT 4, a significantly enhanced model over its predecessor, ChatGPT 3.5, raising discussions in the AI community about the potential inadvertent achievement of Artificial General Intelligence (AGI).

March 20, 2023: Studies examined the responses of GPT 3.5 and GPT 4 to clinical questions, highlighting the need for refinement and evaluation before relying on AI language models in healthcare. GPT 4 outperformed previous models, achieving an average score of 86.65% and 86.7% on the Self-Assessment and Sample Exam of the USMLE tests, with GPT 3.5 achieving 53.61% and 58.78%.

March 21, 2023: Google's focus on AI during its Google I/O event included the release of Bard, a ChatGPT competitor, and other significant announcements about its forthcoming large language models and integrations into Google Workspace and Gmail.

March 21, 2023: Nvidia's announcement of Picasso Cloud Services for creating large language and visual models, aimed at larger enterprises, underscored the increasing interest of major companies in AI technologies.

March 23, 2023: OpenAI's launch of Plugins for GPT expanded the capabilities of GPT models, allowing them to connect to third-party services via an API.

March 30, 2023: AutoGPT was released, with the capability to execute and improve its responses to prompts autonomously. This advancement in AI technology showcased a significant step toward greater autonomy in AI systems, and came with the ability to be installed on users' local PCs, allowing individuals to have a large language model AI chat application in their homes without the need for internet access.

April 4, 2023: An unsurprising study discovered that participants could only differentiate between human and AI-generated text with about 50% accuracy, similar to random chance.

April 13, 2023: AWS announced Bedrock, a service making Fundamental AI Models from various labs accessible via an API, streamlining the development and scaling of generative AI-based applications.
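Bedrock's pitch is a single API in front of many labs' models. A minimal sketch of what a call looks like from Python, assuming boto3 and access to an Anthropic model on Bedrock (the model ID and the per-provider request body schema are the parts most likely to differ in practice):

    # Sketch: one Bedrock call; the request body schema varies by model provider.
    import json
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        contentType="application/json",
        accept="application/json",
        body=json.dumps({
            "prompt": "\n\nHuman: What does a unified model API buy a developer?\n\nAssistant:",
            "max_tokens_to_sample": 200,
        }),
    )
    print(json.loads(response["body"].read())["completion"])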

May 23, 2023: OpenAI revealed plans to enhance ChatGPT with web browsing capabilities using Microsoft Bing and additional plugins, which would initially become available to ChatGPT Plus subscribers.

July 18, 2023: In a study, ChatGPT, particularly GPT 4, was found to be able to outperform medical students in responding to complex clinical care exam questions.

August 6, 2023: The EU AI Act, announced on this day, was one of the world's first legal frameworks for AI, and saw major developments and negotiations in 2023, with potential global implications, though it was still being hashed out in mid-December.

September 8, 2023: A study revealed that AI detectors, designed to identify AI-generated content, exhibit low reliability, especially for content created by non-native English speakers, raising ethical concerns. This has been an ongoing concern for both teachers and students, as these tools regularly present original content as being produced by AI, and AI-generated content as being original.

September 21, 2023: OpenAI announced that Dall-E 3, its text-to-image generation tool, would soon be available to ChatGPT Plus users.

November 4, 2023: Elon Musk announced the latest addition to the world of generative AI: Grok. Musk said that Grok promises to "break the mold of conventional AI," is said to respond with provocative answers and insights, and will welcome all manner of queries.

November 21, 2023: Microsoft unveiled Bing Chat 2.0, now called Copilot, a major upgrade to its own chatbot platform, which leverages a hybrid approach of combining generative and retrieval-based models to provide more accurate and diverse responses.

November 22, 2023: With the release of Claude 2.1, Anthropic announced an expansion in Claude's capabilities, enabling it to analyze large volumes of text rapidly, a development favorably compared to the capabilities of ChatGPT.

December 6, 2023: Google announced its OpenAI rival, Gemini, which is multimodal and can generalize and seamlessly understand, operate across, and combine different types of information, including text, images, audio, video and code.

These were only a very small portion of 2023's AI achievements and events, as nearly every week a new generative AI-driven application was being announced, including specialized AI-driven chatbots for specific use cases, applications, and industries. Additionally, there was often news of interactions with and uses of AI, AI jailbreaks, predictions about the potential dystopian future it may bring, proposals of regulations, legislation and guardrails, and petitions to stop developing the technology.

Shubham A. Mishra, co-founder and global CEO at AI marketing pioneer Pixis, told CMSWire that in 2023, the world focused on building the technology and democratizing it. "We saw people use it, consume it, and transform it into the most effective use cases to the point that it has now become a companion for them," said Mishra. "It has become such an integral part of its user's day-to-day functions that they don't even realize they are consuming it."

"Many view 2023 as the year of generative AI, but we are only beginning to tap into the potential applications of the technology. We are still trying to harness the full potential of generative AI across different use cases. In 2024, the industry will witness major shifts, be it a rise or fall in users and applications," said Mishra. "There may be a rise in the number of users, but there will also be a second wave of Generative AI innovations where there will be an incremental rise in its applications."


Anthony Yell, chief creative officer at interactive agency, Razorfish, told CMSWire that as a chief creative officer, he and his team have seen generative AI stand out by democratizing creativity, making it more accessible and enhancing the potential for those with skills and experience to reach new creative heights. "This technology has introduced the concept of a 'creative partner' or 'creative co-pilot,' revolutionizing our interaction with creative processes."

Yell believes that this era is about marrying groundbreaking creativity with responsible innovation, ensuring that AI's potential is harnessed in a way that respects brand identity and maintains consumer trust. This desire for responsibility and trust is something that is core to the acceptance of what has been and will continue to be a very disruptive technology. As such, 2023 has included many milestones in the quest for AI responsibility, safety, regulations, ethics, and controls. Here are some of the most impactful regulatory AI events in 2023.

February 28, 2023: Former Google engineer Blake Lemoine, who was fired in 2022 for going to the press with claims that Google LaMDA is actually sentient, was back in the news doubling down on his claim.

March 22, 2023: A group of technology and business leaders, including Elon Musk, Steve Wozniak and tech leaders from Meta, Google and Microsoft, signed an open letter hosted by the Future of Life Institute urging AI organizations to pause new developments in AI, citing risks to society. The letter stated that "we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT 4."

May 16, 2023: Sam Altman, CEO and co-founder of OpenAI, spoke with members of Congress, urging them to regulate AI due to the inherent risks posed by the technology.

May 30, 2023: AI industry leaders and researchers signed a statement hosted by the Center for AI Safety warning of the "extinction risk posed by AI." The statement said that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," and was signed by OpenAI CEO Sam Altman, Geoffrey Hinton, Google DeepMind and Anthropic executives and researchers, Microsoft CTO Kevin Scott, and security expert Bruce Schneier.

October 31, 2023: President Biden signed the sweeping Executive Order on Artificial Intelligence, which was designed to establish new standards for AI safety and security, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership around the world.

November 14, 2023: The DHS Cybersecurity and Infrastructure Security Agency (CISA) released its initial Roadmap for Artificial Intelligence, leading the way to ensure safe and secure AI development in the future. The CISA AI roadmap came in response to President Biden's October 2023 Executive Order on Artificial Intelligence.

December 11, 2023: The European Commission and the bloc's 27 member countries reached a deal on the world's first comprehensive AI rules, opening the door for the legal oversight of AI technology.

Rubab Rizvi, chief data scientist at Brainchild, a media agency affiliated with the Publicis Groupe, told CMSWire that from predictive analytics to seamless automation, the rapid embrace of AI has not only elevated efficiency but has also opened new frontiers for innovation, shaping a dynamic landscape that keeps us on our toes and fuels the excitement of what's to come.

"The generative AI we've come to embrace in 2023 hasn't just been about enhancing personalization," said Rizvi. "It's becoming your digital best friend, offering tailored experiences that elevate brand engagement to a new level. This calls for proper governance and guardrails. As generative AI can potentially expose new, previously inaccessible data, we must ensure that we are disciplined in protecting ourselves and our unstructured data." Rizvi aptly reiterated what many have said throughout the year: "Don't blindly trust the machine."


OpenAI was the organization that officially started the era of AI with the announcement and introduction of ChatGPT 3.5 in 2022. In the year that followed, OpenAI worked ceaselessly to continue the evolution of AI, and it has seen its share of both conspiracies and controversies. These came to a head late in the year, when the organization surprised everyone with news regarding its CEO, Sam Altman.

November 17, 2023: The board of OpenAI fired co-founder and CEO Sam Altman, stating that a board review found he was "not consistently candid in his communications" and that "the board no longer has confidence in his ability to continue leading OpenAI."

November 20, 2023: Microsoft hired former OpenAI CEO Sam Altman and co-founder Greg Brockman, with Microsoft CEO Satya Nadella announcing that Altman and Brockman would be joining to lead Microsoft's new advanced AI research team, and that Altman would become CEO of the new group.

November 22, 2023: OpenAI rehired Sam Altman as its CEO, stating that it had "reached an agreement in principle for Sam Altman to return to OpenAI as CEO," along with significant changes in its non-profit board.

November 24, 2023: It was suggested that prior to Altman's firing, OpenAI researchers sent a letter to its board of directors warning of a new AI discovery that posed potential risks to humanity. The discovery, which has been referred to as Project Q*, was said to be a breakthrough in the pursuit of AGI, and reportedly influenced the board's firing of Sam Altman because of concerns that he was rushing to commercialize the new AI advancement without fully understanding its implications.

AGI, which Microsoft has since said could take decades to reach, is an advanced form of AI characterized by self-learning capabilities and proficiency in a wide range of tasks, and its pursuit stands as a cornerstone objective in the AI field. AGI research seeks to develop machines that mirror human intelligence, with the ability to understand, learn, and adeptly apply knowledge across diverse contexts, potentially surpassing human performance in various domains.

Reflecting on 2023, we have witnessed a landmark year in AI, marked by groundbreaking advancements. Amidst these innovations, the year has also been pivotal in addressing the ethical, safety, and regulatory aspects of AI. As we conclude the year, the progress in AI not only showcases human ingenuity but also sets the stage for future challenges and opportunities, emphasizing the need for responsible stewardship of this transformative yet disruptive technology.

See the article here:

The Era of AI: 2023's Landmark Year - CMSWire

Read More..

Forget Dystopian Scenarios AI Is Pervasive Today, and the Risks Are Often Hidden – The Good Men Project

By Anjana Susarla, Michigan State University

The turmoil at ChatGPT-maker OpenAI, bookended by the board of directors firing high-profile CEO Sam Altman on Nov. 17, 2023, and rehiring him just four days later, has put a spotlight on artificial intelligence safety and concerns about the rapid development of artificial general intelligence, or AGI. AGI is loosely defined as human-level intelligence across a range of tasks.

The OpenAI board stated that Altman's termination was for a "lack of candor," but speculation has centered on a rift between Altman and members of the board over concerns that OpenAI's remarkable growth (products such as ChatGPT and Dall-E have acquired hundreds of millions of users worldwide) has hindered the company's ability to focus on catastrophic risks posed by AGI.

OpenAI's goal of developing AGI has become entwined with the idea of AI acquiring superintelligent capabilities and the need to safeguard against the technology being misused or going rogue. But for now, AGI and its attendant risks are speculative. Task-specific forms of AI, meanwhile, are very real, have become widespread and often fly under the radar.

As a researcher of information systems and responsible AI, I study how these everyday algorithms work and how they can harm people.

AI plays a visible part in many people's daily lives, from face recognition unlocking your phone to speech recognition powering your digital assistant. It also plays roles you might be vaguely aware of, for example, shaping your social media and online shopping sessions, guiding your video-watching choices and matching you with a driver in a ride-sharing service.

AI also affects your life in ways that might completely escape your notice. If you're applying for a job, many employers use AI in the hiring process. Your bosses might be using it to identify employees who are likely to quit. If you're applying for a loan, odds are your bank is using AI to decide whether to grant it. If you're being treated for a medical condition, your health care providers might use it to assess your medical images. And if you know someone caught up in the criminal justice system, AI could well play a role in determining the course of their life.

Many of the AI systems that fly under the radar have biases that can cause harm. For example, machine learning methods use inductive logic, which starts with a set of premises, to generalize patterns from training data. A machine learning-based resume screening tool was found to be biased against women because the training data reflected past practices when most resumes were submitted by men.

The use of predictive methods in areas ranging from health care to child welfare can exhibit biases, such as cohort bias, that lead to unequal risk assessments across different groups in society. Even when legal practices prohibit discrimination based on attributes such as race and gender (for example, in consumer lending), proxy discrimination can still occur. This happens when algorithmic decision-making models do not use characteristics that are legally protected, such as race, and instead use characteristics that are highly correlated or connected with the legally protected characteristic, like neighborhood. Studies have found that risk-equivalent Black and Latino borrowers pay significantly higher interest rates on government-sponsored enterprise securitized and Federal Housing Authority insured loans than white borrowers.
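A toy model makes the proxy mechanism concrete: exclude the protected attribute entirely, train on historically skewed outcomes, and the model rediscovers the discrimination through a correlated feature. A sketch with synthetic data (all numbers here are illustrative, not from any cited study):

    # Toy demonstration of proxy discrimination on synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, size=n)                     # protected attribute (withheld from the model)
    proxy = (group + rng.normal(scale=0.4, size=n)) > 0.5  # correlated feature, e.g. neighborhood
    skill = rng.normal(size=n)                             # legitimate signal
    # Historical outcomes penalized the protected group directly:
    label = (skill - group + rng.normal(scale=0.5, size=n)) > 0

    X = np.column_stack([skill, proxy])                    # protected attribute excluded from features
    model = LogisticRegression().fit(X, label)
    print(model.coef_)  # the proxy column carries a negative weight: the bias resurfaces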

Another form of bias occurs when decision-makers use an algorithm differently from how the algorithm's designers intended. In a well-known example, a neural network learned to associate asthma with a lower risk of death from pneumonia. This was because asthmatics with pneumonia are traditionally given more aggressive treatment that lowers their mortality risk compared to the overall population. However, if the outcome from such a neural network is used in hospital bed allocation, then those with asthma and admitted with pneumonia would be dangerously deprioritized.

Biases from algorithms can also result from complex societal feedback loops. For example, when predicting recidivism, authorities attempt to predict which people convicted of crimes are likely to commit crimes again. But the data used to train predictive algorithms is actually about who is likely to get re-arrested.

The Biden administration's recent executive order and enforcement efforts by federal agencies such as the Federal Trade Commission are the first steps in recognizing and safeguarding against algorithmic harms.

And though large language models, such as GPT-3 that powers ChatGPT, and multimodal large language models, such as GPT-4, are steps on the road toward artificial general intelligence, they are also algorithms people are increasingly using in school, work and daily life. It's important to consider the biases that result from widespread use of large language models.

For example, these models could exhibit biases resulting from negative stereotyping involving gender, race or religion, as well as biases in representation of minorities and disabled people. As these models demonstrate the ability to outperform humans on tests such as the bar exam, I believe that they require greater scrutiny to ensure that AI-augmented work conforms to standards of transparency, accuracy and source crediting, and that stakeholders have the authority to enforce such standards.

Ultimately, who wins and loses from large-scale deployment of AI may not be about rogue superintelligence, but about understanding who is vulnerable when algorithmic decision-making is ubiquitous.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Go here to read the rest:

Forget Dystopian Scenarios AI Is Pervasive Today, and the Risks Are Often Hidden - The Good Men Project

Read More..

The Most Important AI Innovations of 2024 | by AI News | Dec, 2023 – DataDrivenInvestor


In the fast-paced realm of artificial intelligence (AI), 2024 will be a transformative year, marking a profound shift in our understanding of AI capabilities and their real-world applications. While some developments will be the culmination of years of progress, others will emerge as groundbreaking innovations. In this article, we'll explore the most important AI innovations that will define 2024.

The term multimodality may sound technical, but its implications are revolutionary. In essence, it refers to an AI system's ability to process diverse types of data, extending beyond text to include images, video, audio, and more. In 2023, the public witnessed the debut of powerful multimodal AI models, with OpenAI's GPT-4 leading the way. This model allows users to upload not only text but also images, enabling the AI to see and interpret visual content.

Google DeepMind's Gemini, unveiled in December, further advanced multimodality, showcasing the model's capacity to work with images and audio. This breakthrough opens doors to endless possibilities, such as seeking dinner suggestions based on a photo of your fridge contents. According to Shane Legg, co-founder of Google DeepMind, the shift towards fully multimodal AI marks a significant landmark, indicating a more grounded understanding of the world.
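That fridge-photo scenario maps directly onto the multimodal chat APIs that shipped in 2023. A minimal sketch against OpenAI's late-2023 Python client and vision-capable model (the client shape and model name are as of that release and have since evolved; the image URL is a placeholder):

    # Sketch: one request mixing text and an image, per OpenAI's 2023 vision API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What could I cook with these ingredients?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }],
        max_tokens=300,
    )
    print(response.choices[0].message.content)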

The promise of multimodality extends beyond mere utility; it enables models to be trained on diverse data sets, including images, video, and audio. This wealth of information enhances the models' capabilities, propelling them towards the ultimate goal of artificial general intelligence that matches human intellect.

See more here:

The Most Important AI Innovations of 2024 | by AI News | Dec, 2023 - DataDrivenInvestor

Read More..