
NIST will fire the starting gun in the race to quantum encryption – Nextgov/FCW

As the National Institute of Standards and Technology is slated to soon debut the first round of encryption algorithms it has deemed suited for the potential arrival of a viable quantum computer, experts have advice for organizations: know your code.

The need for strong cryptographic governance ahead of migrating digital networks to a post-quantum standard will be a major component of updated cybersecurity best practices, as both public and private sectors begin to reconcile their network security with new algorithmic needs.

Matthew Scholl, the chief of the computer security division in the National Institute of Standards and Technology's Information Technology Laboratory, said that understanding what a given organization's security capabilities are will offer insight into what aspects of a network should transition first.

Deep understanding of what current encryption methods do and precisely where they are will be a fundamental aspect of correctly implementing the three forthcoming quantum-resistant algorithms.

"With that information, you should then be able to prioritize what to change and when, and you should plan for the long-term changes and updates going forward," Scholl told Nextgov/FCW.

Scott Crowder, vice president for IBM Quantum Adoption and Business Development, echoed Scholl's points on creating a cryptographic inventory to ensure the algorithms are properly configured. Crowder said that while overhauling encryption code is a comprehensive transition, understanding what needs to change can be difficult based on who wrote the code in the first place.
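Neither Scholl nor Crowder prescribes specific tooling for building that inventory, but a minimal sketch of what a first automated pass over internal code could look like is below; the file extensions and algorithm names it searches for are illustrative assumptions, not an official checklist.

```python
import re
from pathlib import Path

# Hypothetical starting point for a cryptographic inventory: flag source files
# that reference common quantum-vulnerable public-key primitives for review.
QUANTUM_VULNERABLE = re.compile(
    r"\b(RSA|ECDSA|ECDH|DSA|DiffieHellman|secp256k1|prime256v1)\b"
)

def scan_repository(root: str) -> dict[str, list[int]]:
    """Return {file: [line numbers]} where legacy public-key algorithms appear."""
    findings: dict[str, list[int]] = {}
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".java", ".go", ".c", ".cpp", ".ts"}:
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        hits = [i for i, line in enumerate(lines, 1) if QUANTUM_VULNERABLE.search(line)]
        if hits:
            findings[str(path)] = hits
    return findings

if __name__ == "__main__":
    for file, lines in scan_repository(".").items():
        print(f"{file}: lines {lines}")
```

A real inventory would also cover binaries, certificates, TLS configurations and vendor software, which is where Crowder's "rest of your IT supply chain" point comes in.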

"It's a pain because it's actually at two levels," Crowder told Nextgov/FCW. "First you get all the code that you've written, but then you've got all the rest of your IT supply chain that vendors provide."

Based on client conversations, Crowder estimates that 20% of the transformation problem hinges on an entity's internal code, while the remaining 80% is ensuring the vendors in their supply chains have correctly implemented NIST's new algorithms.

"From our experience, and doing some work with clients, typically for one application area, it's like three to six months to discover the environment and do some of the basic remediation," he said. "But, you know, that's like a small part of the elephant."

In addition to creating a comprehensive cryptographic inventory that can determine which code should be updated, Scholl said that cybersecurity in a quantum-ready era needs to be versatile.

"You need to build your systems with flexibility so that it can change," he said. "Don't put something that's [going] to be the next generation's legacy. Build something that is agile and flexible."

The debut of the three standardized post-quantum algorithms (ML-KEM, CRYSTALS-Dilithium and SPHINCS+) will enable classical computers to keep data encrypted against a future fault-tolerant, quantum-powered computer. During their implementation processes, Scholl said that organizations need to both continue monitoring the configuration of the newly implemented algorithms and consistently test for vulnerabilities.
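Scholl's "agile and flexible" advice is frequently realized as an abstraction layer that lets the key-establishment algorithm be swapped by configuration rather than by rewriting call sites. The sketch below is a hypothetical illustration only: the Kem interface, the registry, and the stubbed ML-KEM entry are assumptions standing in for whichever vetted implementation an organization adopts.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

# Hypothetical crypto-agility layer: call sites ask for "the configured KEM"
# instead of hard-coding one algorithm, so moving to ML-KEM later (or to a
# successor algorithm) is a configuration change, not a code change.

@dataclass
class Kem:
    keygen: Callable[[], Tuple[bytes, bytes]]            # -> (public_key, secret_key)
    encapsulate: Callable[[bytes], Tuple[bytes, bytes]]  # pk -> (ciphertext, shared_secret)
    decapsulate: Callable[[bytes, bytes], bytes]         # (ct, sk) -> shared_secret

KEM_REGISTRY: Dict[str, Kem] = {}

def register_kem(name: str, kem: Kem) -> None:
    KEM_REGISTRY[name] = kem

def get_kem(name: str) -> Kem:
    return KEM_REGISTRY[name]

# Placeholder registration; a real deployment would plug in a vetted ML-KEM
# implementation here, and could keep a classical KEM alongside it for hybrids.
def _todo(*_args):
    raise NotImplementedError("plug in a vetted ML-KEM library")

register_kem("ML-KEM-768", Kem(keygen=_todo, encapsulate=_todo, decapsulate=_todo))

# Application code reads the algorithm name from configuration:
ACTIVE_KEM = get_kem("ML-KEM-768")
```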

Scholl said that the fourth algorithm, Falcon, which was selected as a winning algorithm in 2022 along with the other three, will be released for implementation later this year.

Despite the milestone in quantum cryptography readiness, Crowder notes that this is just the beginning for a new era of cybersecurity hygiene.

"You can think of the NIST standardization as basically the starting gun," he said. "But there's a lot of work to be done on taking those standards, making sure that all the open source implementations, all the proprietary implementations get done, and then rippling through and doing all the hard work in terms of doing the transformation upgrade."

See the original post:
NIST will fire the starting gun in the race to quantum encryption - Nextgov/FCW

Read More..

Quantinuum and Science and Technology Facilities Council (STFC) Hartree Center Partner to Advance Quantum Innovation and Development in the UK -…

Quantinuum, the world's largest integrated quantum computing company, has signed a Joint Statement of Endeavour with the STFC Hartree Center, one of Europe's largest supercomputing centers dedicated to industry engagement. The partnership will provide UK industrial and scientific users access to Quantinuum's H-Series, the world's highest-performing trapped-ion quantum computers, via the cloud and on-premise.

"Research and scientific discovery are central to our culture at Quantinuum, and we are proud to support the pioneers at the Hartree Center," said Raj Hazra, CEO of Quantinuum. "As we accelerate quantum computing, the Hartree Center and the UK quantum ecosystem will be on the forefront of building solutions powered by quantum computers at scale."

Both organisations aim to support UK businesses and research organizations in exploring quantum advantage in quantum chemistry, computational biology, quantum artificial intelligence and quantum-augmented cybersecurity. The UK has a strong global reputation in each domain, and quantum computing is expected to accelerate development in the coming years.

"Quantinuum's H-Series hardware will benefit scientists across various areas of research, including exascale computing algorithms, fusion energy development, climate resilience and more," said Kate Royse, Director of the STFC Hartree Center. "This partnership also furthers our five-year plan to unlock the high growth potential of advanced digital technologies for UK industry."

The Hartree Center is part of the Science and Technology Facilities Council (STFC) within UK Research and Innovation, building on a wealth of established scientific heritage and a network of international expertise. The center's experts collaborate with industry and the research community to explore the latest technologies, upskill teams and apply practical digital solutions across supercomputing, data science and AI.

Quantinuum's H-Series quantum computers are the highest-performing in the world, having consistently held the world record for quantum volume, a widely used benchmark for quantum computing performance, for over three years; the record currently stands at 2^20 (1,048,576).

In April 2024, Quantinuum and Microsoft reported a breakthrough demonstration of four reliable logical qubits using quantum error correction, an important technology necessary for practical quantum computing. During the same month, Quantinuum extended its industry leadership with its H-Series computer becoming the first to achieve "three 9s" (99.9%) two-qubit gate fidelity across all qubit pairs in a production device, a critical milestone that enables fault-tolerant quantum computing.

This achievement was immediately available to Quantinuum customers, who depend on using the very best quantum hardware and software, enabling them to push the boundaries on new solutions in areas such as materials development, drug discovery, machine learning, cybersecurity, and financial services.

Quantinuum, known as Cambridge Quantum prior to its 2021 combination with Honeywell Quantum Solutions, was one of the UK government's delivery partners following the 2014 launch of the National Quantum Technologies Programme. Cambridge Quantum ran the Quantum Readiness Programme for several years to inspire UK business and industry to invest in quantum computing and explore the potential use cases of this revolutionary technology.

Earlier this year, Quantinuum was selected as a winner in the £15 million SBRI Quantum Catalyst Fund to support the UK Government in delivering the benefits of quantum technologies, with an initial focus on simulating actinide chemistry using quantum computers.

Read more:
Quantinuum and Science and Technology Facilities Council (STFC) Hartree Center Partner to Advance Quantum Innovation and Development in the UK -...

Read More..

DeepMind's PEER scales language models with millions of tiny experts – VentureBeat


Mixture-of-Experts (MoE) has become a popular technique for scaling large language models (LLMs) without exploding computational costs. Instead of using the entire model capacity for every input, MoE architectures route the data to small but specialized expert modules. MoE enables LLMs to increase their parameter count while keeping inference costs low. MoE is used in several popular LLMs, including Mixtral, DBRX, Grok and reportedly GPT-4.

However, current MoE techniques have limitations that restrict them to a relatively small number of experts. In a new paper, Google DeepMind introduces Parameter Efficient Expert Retrieval (PEER), a novel architecture that can scale MoE models to millions of experts, further improving the performance-compute tradeoff of large language models.

The past few years have shown that scaling language models by increasing their parameter count leads to improved performance and new capabilities. However, there is a limit to how much you can scale a model before running into computational and memory bottlenecks.

Every transformer block used in LLMs has attention layers and feedforward (FFW) layers. The attention layer computes the relations between the sequence of tokens fed to the transformer block. The feedforward network is responsible for storing the model's knowledge. FFW layers account for two-thirds of the model's parameters and are one of the bottlenecks of scaling transformers. In the classic transformer architecture, all the parameters of the FFW are used in inference, which makes their computational footprint directly proportional to their size.

MoE tries to address this challenge by replacing the FFW with sparsely activated expert modules instead of a single dense FFW layer. Each of the experts contains a fraction of the parameters of the full dense layer and specializes in certain areas. The MoE has a router that assigns each input to several experts who are likely to provide the most accurate answer.

By increasing the number of experts, MoE can increase the capacity of the LLM without increasing the computational cost of running it.
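As a rough illustration of that routing idea (not the implementation of any specific model named above), the NumPy sketch below sends a token to its top-k experts and mixes their outputs with the router's softmax weights; all sizes and weights are invented for the demo.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(x, router_w, experts, top_k=2):
    """x: (d_model,) token activation; router_w: (n_experts, d_model);
    experts: list of callables mapping (d_model,) -> (d_model,)."""
    logits = router_w @ x                      # one routing score per expert
    top = np.argsort(logits)[-top_k:]          # indices of the k best-scoring experts
    weights = softmax(logits[top])             # renormalize over the chosen experts
    # Only the selected experts run, so expert compute grows with top_k, not n_experts.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: 8 experts, each a small two-layer MLP with random weights.
rng = np.random.default_rng(0)
d_model, d_hidden, n_experts = 16, 32, 8

def make_expert():
    w1 = rng.normal(size=(d_hidden, d_model))
    w2 = rng.normal(size=(d_model, d_hidden))
    return lambda v: w2 @ np.maximum(w1 @ v, 0.0)

experts = [make_expert() for _ in range(n_experts)]
router_w = rng.normal(size=(n_experts, d_model))
print(moe_layer(rng.normal(size=d_model), router_w, experts).shape)  # (16,)
```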

According to recent studies, the optimal number of experts for an MoE model is related to several factors, including the number of training tokens and the compute budget. When these variables are balanced, MoEs have consistently outperformed dense models for the same amount of compute resources.

Furthermore, researchers have found that increasing the granularity of an MoE model, which refers to the number of experts, can lead to performance gains, especially when accompanied by an increase in model size and training data.

High-granularity MoE can also enable models to learn new knowledge more efficiently. Some studies suggest that by adding new experts and regularizing them properly, MoE models can adapt to continuous data streams, which can help language models deal with continuously changing data in their deployment environments.

Current approaches to MoE are limited and unscalable. For example, they usually have fixed routers that are designed for a specific number of experts and need to be readjusted when new experts are added.

DeepMind's Parameter Efficient Expert Retrieval (PEER) architecture addresses the challenges of scaling MoE to millions of experts. PEER replaces the fixed router with a learned index to efficiently route input data to a vast pool of experts. For each given input, PEER first uses a fast initial computation to create a shortlist of potential candidates before choosing and activating the top experts. This mechanism enables the MoE to handle a very large number of experts without slowing down.

Unlike previous MoE architectures, where experts were often as large as the FFW layers they replaced, PEER uses tiny experts with a single neuron in the hidden layer. This design enables the model to share hidden neurons among experts, improving knowledge transfer and parameter efficiency. To compensate for the small size of the experts, PEER uses a multi-head retrieval approach, similar to the multi-head attention mechanism used in transformer models.
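The paper's full design uses product-key retrieval and multiple heads; the simplified single-head sketch below scores every expert's key directly (which forgoes the efficiency that product keys provide) but shows the core idea of retrieving a handful of single-neuron experts from a very large pool. All shapes and names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def peer_layer(x, w_query, keys, down, up, top_k=16):
    """Simplified single-head PEER-style layer.
    x: (d,) input; w_query: (d_k, d); keys: (n_experts, d_k);
    down, up: (n_experts, d) -- each expert is a single hidden neuron."""
    q = w_query @ x                          # learned query for this token
    scores = keys @ q                        # similarity to every expert's key
    top = np.argsort(scores)[-top_k:]        # retrieve a small shortlist of experts
    gates = softmax(scores[top])             # router weights over retrieved experts
    hidden = np.maximum(down[top] @ x, 0.0)  # one scalar activation per tiny expert
    # Weighted sum of each retrieved expert's single output direction.
    return (gates * hidden) @ up[top]

# Toy usage: a (scaled-down) pool of 10,000 tiny experts.
rng = np.random.default_rng(1)
d, d_k, n_experts = 64, 16, 10_000
out = peer_layer(
    rng.normal(size=d),
    rng.normal(size=(d_k, d)),
    rng.normal(size=(n_experts, d_k)),
    rng.normal(size=(n_experts, d)) / np.sqrt(d),
    rng.normal(size=(n_experts, d)) / np.sqrt(d),
)
print(out.shape)  # (64,)
```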

A PEER layer can be added to an existing transformer model or used to replace an FFW layer. PEER is also related to parameter-efficient fine-tuning (PEFT) techniques. In PEFT techniques, parameter efficiency refers to the number of parameters that are modified to fine-tune a model for a new task. In PEER, parameter efficiency means reducing the number of active parameters in the MoE layer, which directly affects computation and activation memory consumption during pre-training and inference.

According to the paper, PEER could potentially be adapted to select PEFT adapters at runtime, making it possible to dynamically add new knowledge and features to LLMs.

PEER might be used in DeepMind's Gemini 1.5 models, which according to the Google blog use "a new Mixture-of-Experts (MoE) architecture."

The researchers evaluated the performance of PEER on different benchmarks, comparing it against transformer models with dense feedforward layers and other MoE architectures. Their experiments show that PEER models achieve a better performance-compute tradeoff, reaching lower perplexity scores with the same computational budget as their counterparts.

The researchers also found that increasing the number of experts in a PEER model leads to further perplexity reduction.

"This design demonstrates a superior compute-performance trade-off in our experiments, positioning it as a competitive alternative to dense FFW layers for scaling foundation models," the researchers write.

The findings are interesting because they challenge the long-held belief that MoE models reach peak efficiency with a limited number of experts. PEER shows that by applying the right retrieval and routing mechanisms, it is possible to scale MoE to millions of experts. This approach can help further reduce the cost and complexity of training and serving very large language models.


See the rest here:
DeepMinds PEER scales language models with millions of tiny experts - VentureBeat

Read More..

Google’s AI robots are learning from watching movies just like the rest of us – TechRadar

Google DeepMind's robotics team is teaching robots to learn how a human intern would: by watching a video. The team has published a new paper demonstrating how Google's RT-2 robots embedded with the Gemini 1.5 Pro generative AI model can absorb information from videos to learn how to get around and even carry out requests at their destination.

Thanks to the Gemini 1.5 Pro model's long context window, training a robot like a new intern is possible. This window allows the AI to process extensive amounts of information simultaneously. The researchers would film a video tour of a designated area, such as a home or office. Then, the robot would watch the video and learn about the environment.

The details in the video tours let the robot complete tasks based on its learned knowledge, using both verbal and image outputs. It's an impressive way of showing how robots might interact with their environment in ways reminiscent of human behavior. You can see how it works in the video below, as well as examples of different tasks the robot might carry out.

Those demonstrations aren't rare flukes, either. In practical tests, Gemini-powered robots operated within a 9,000-square-foot area and successfully followed over 50 different user instructions with a 90 percent success rate. This high level of accuracy opens up many potential real-world uses for AI-powered robots, helping out at home with chores or at work with menial or even more complex tasks.

That's because one of the more notable aspects of the Gemini 1.5 Pro model is its ability to complete multi-step tasks. DeepMind's research has found that the robots can work out how to answer questions like whether there's a specific drink available by navigating to a refrigerator, visually processing what's within, and then returning and answering the question.

The idea of planning and carrying out the entire sequence of actions demonstrates a level of understanding and execution that goes beyond the current standard of single-step orders for most robots.

Don't expect to see this robot for sale any time soon, though. For one thing, it takes up to 30 seconds to process each instruction, which is way slower than just doing something yourself in most cases. The chaos of real-world homes and offices will be much harder for a robot to navigate than a controlled environment, no matter how advanced the AI model is.


Still, integrating AI models like Gemini 1.5 Pro into robotics is part of a larger leap forward in the field. Robots equipped with models like Gemini or its rivals could transform healthcare, shipping, and even janitorial duties.

See the original post here:
Google's AI robots are learning from watching movies just like the rest of us - TechRadar

Read More..

Google DeepMind Is Integrating Gemini 1.5 Pro in Robots That Can Navigate Real-World Environments – Gadgets 360

Google DeepMind shared new advancements made in the field of robotics and vision language models (VLMs) on Thursday. The artificial intelligence (AI) research division of the tech giant has been working with advanced vision models to develop new capabilities in robots. In a new study, DeepMind highlighted that using Gemini 1.5 Pro and its long context window has now enabled the division to make breakthroughs in navigation and real-world understanding of its robots. Earlier this year, Nvidia also unveiled new AI technology that powers advanced capabilities in humanoid robots.

In a post on X (formerly known as Twitter), Google DeepMind revealed that it has been training its robots using Gemini 1.5 Pro's 2 million token context window. Context windows can be understood as the window of knowledge visible to an AI model, using which it processes tangential information around the queried topic.

For instance, if a user asks an AI model about the most popular ice cream flavours, the AI model will check the keywords "ice cream" and "flavours" to find information relevant to that question. If this information window is too small, then the AI will only be able to respond with the names of different ice cream flavours. However, if it is larger, the AI will also be able to see the number of articles about each ice cream flavour to find which has been mentioned the most and deduce the popularity factor.

DeepMind is taking advantage of this long context window to train its robots in real-world environments. The division aims to see if the robot can remember the details of an environment and assist users when asked about the environment with contextual or vague terms. In a video shared on Instagram, the AI division showcased that a robot was able to guide a user to a whiteboard when he asked it for a place where he could draw.

"Powered with 1.5 Pro's 1 million token context length, our robots can use human instructions, video tours, and common sense reasoning to successfully find their way around a space," Google DeepMind stated in a post.

In a study published on arXiv (a non-peer-reviewed online journal), DeepMind explained the technology behind the breakthrough. In addition to Gemini, it is also using its own Robotic Transformer 2 (RT-2) model. It is a vision-language-action (VLA) model that learns from both web and robotics data. It utilises computer vision to process real-world environments and use that information to create datasets. This dataset can later be processed by the generative AI to break down contextual commands and produce desired outcomes.

At present, Google DeepMind is using this architecture to train its robots on a broad category known as Multimodal Instruction Navigation (MIN) which includes environment exploration and instruction-guided navigation. If the demonstration shared by the division is legitimate, this technology might further advance robotics.

Go here to read the rest:
Google DeepMind Is Integrating Gemini 1.5 Pro in Robots That Can Navigate Real-World Environments - Gadgets 360

Read More..

Embracing Decentralized Compute: The Future of Cloud Computing – Grit Daily

We all know about the cloud and use it regularly, and in 2024, it's become a necessity for most people. However, beyond our immediate use, many of us don't consider the larger picture (servers, storage, databases, networking, software, analytics, and intelligence) or where that collected data about us lives.

Cloud computing is the backbone of many industries, driving everything from small startups to global enterprises. Until recently, traditional, centralized cloud providers have dominated the market, but new decentralized methods are emerging and subtly becoming the new darling of the space.

The following analysis aims to inform those who rely on these powerful resources daily about the hazards of centralized cloud computing and discuss the benefits of its counterpart, decentralized cloud compute, which addresses the inherent limitations of centralized systems by providing equitable and safe access to all.

The dictionary defines democracy as "a system of government by the whole population or all the eligible members of a state, typically through elected representatives." We all know the word, and we generally know what it means. However, reading the actual definition highlights the defining feature of democracy very clearly: "by the whole population."

In the ideal democratic society, every member participates, helps make decisions, and therefore benefits. Decentralized compute power distributes that "by the whole population" mentality to its users.

By leveraging a network of globally distributed data centers, decentralized compute providers ensure that power and data are not concentrated in a few geographic locations or controlled by a handful of large corporations. Sticking to the metaphor, those centralized locations and corporations running things would be akin to a monarchy.

Rather, the decentralized model ensures that resources are spread across multiple data center locations, owned by myriad groups, and this works regardless of geographic location or financial status. Full decentralization will be achieved when people are able to provide their idle devices for spare computing. However, at the moment, this distributed model with various data centers across the world is as close to decentralization as possible.

By removing the barriers imposed by centralized control, decentralized compute cloud suppliers give smaller companies and startups the same high-performance computing resources that larger enterprises enjoy, leveling the playing field and promoting inclusivity.

One of the significant advantages of decentralized compute power is its intrinsic scalability and flexibility. Unlike centralized systems, which rely on a few massive data centers, decentralized networks distribute workloads across numerous smaller nodes. This distribution allows for more efficient use of resources, as workloads can be dynamically allocated to underutilized nodes, enhancing overall system performance.

Furthermore, decentralized systems can dynamically allocate computing tasks based on real-time demand, which means resources are used more efficiently. Conversely, centralized data centers often run at full capacity, with minimum usage requirements regardless of actual demand, leading to higher energy use.
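The providers described here do not publish their schedulers, but in its simplest form "allocate to underutilized nodes" reduces to least-loaded placement. The toy sketch below uses invented node names and capacities purely to illustrate that idea.

```python
import heapq

# Toy demand-based placement: each incoming job goes to the node with the
# most spare capacity at that moment. Node names and capacities are made up.
nodes = {"eu-1": 100, "us-1": 80, "ap-1": 60}   # available compute units
heap = [(-capacity, name) for name, capacity in nodes.items()]
heapq.heapify(heap)

def place(job_units: int) -> str:
    """Assign a job to the least-loaded node and update its remaining capacity."""
    spare, name = heapq.heappop(heap)
    remaining = -spare - job_units
    heapq.heappush(heap, (-remaining, name))
    return name

for job in [10, 40, 25, 5]:
    print(f"job({job}) -> {place(job)}")
```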

"Decentralized cloud computing is modern society's answer to massive environmental issues brought on by big centralized companies. Fortified by blockchain, it represents the future of a fair and secure digital infrastructure, one that does much less harm to our planet by properly taking advantage of and sharing underutilized digital resources," says Matt Hawkins, Founder of CUDOS, a blockchain network that combines cloud and blockchain technologies to provide global decentralized computing power for users and developers.

The integration of blockchain technology in decentralized compute networks ensures that information and resources are not monopolized by a single entity. Blockchain's built-in decentralized nature provides enhanced security, transparency, and resilience, making it a reliable method for these networks.

Decentralized compute networks align perfectly with the spirit of Web3, which aims to create a more open and equitable internet, by ensuring that no single entity can corner computing resources or information. This alignment enhances security and transparency, promoting innovation by furnishing a collaborative and inclusive environment for developers and users alike.

Additionally, decentralized compute companies often accept cryptocurrency payments, enhancing the user experience by providing flexible payment options. This feature is particularly beneficial for users in regions with limited access to traditional banking services, as it allows them to access computing resources without relying on conventional financial institutions.

Cryptocurrency transactions can be more cost-effective than traditional payment methods, reducing overhead costs for both providers and users. This cost efficiency translates into lower prices for computing resources, making high-performance computing more accessible to a broader audience.

Large enterprises often assume that centralized cloud providers offer the most scalability and reliability. However, switching to decentralized compute networks provides additional benefits, such as enhanced security through blockchain integration and lower costs due to more efficient resource utilization.

For small companies and startups, decentralized compute networks offer a cost-effective and scalable solution that can grow with their needs. By providing access to high-performance computing resources without the need for significant upfront investment, decentralized networks enable these smaller entities to compete on a more level playing field with larger corporations.

Decentralized compute power represents a significant shift in how computing resources are accessed and utilized. By offering scalable, flexible, and secure solutions, decentralized networks provide equitable access to high-performance computing, aligning with the decentralized ethos of Web3.

Whether you run a large enterprise or a small startup, exploring decentralized compute networks provides significant benefits, enhancing your operational efficiency and company innovation.

Spencer Hulse is the Editorial Director at Grit Daily. He is responsible for overseeing other editors and writers, day-to-day operations, and covering breaking news.

Go here to see the original:

Embracing Decentralized Compute: The Future of Cloud Computing - Grit Daily

Read More..

Bringing blockchain to governments and enterprises, as explained by BCG's Tibor Mérey – CoinGeek

At the London Blockchain Conference this year, Tibor Mérey delivered a keynote address on enterprise blockchain adoption. CoinGeek Backstage caught up with him on the sidelines of the conference to discuss blockchain's progress within big businesses and governments.

Mérey is a managing director and partner at the Boston Consulting Group (BCG), the world's second-largest management consultancy. At BCG, he leads the firm's Web3, IoT, and extended reality divisions.

"For the longest time, blockchain has been pushed as a hammer looking for nails. Those times are over," Mérey told CoinGeek Backstage's Kurt Wuckert Jr.

Mérey believes that for blockchain to break through in the enterprise world, developers must focus solely on the challenges they seek to solve, not the sideshows.

Satoshi launched Bitcoin as peer-to-peer electronic cash to fix a broken model that relied on a few intermediaries. Mérey believes that this vision is still alive, as the challenges that plagued the world in 2008 are still prevalent today.

However, blockchain has emerged as an even bigger opportunity, and enterprises are warming up to Web3, he told CoinGeek Backstage.

Mérey also delved into the debate on whether there will ultimately be one chain to rule them all. Today, thousands of chains exist, with most being ghost towns that process a handful of transactions.

"I don't think there will be space for 5,000 blockchains," Mérey says.

He believes that most of the chains without utility will either collapse or consolidate in the future. However, he warned against blockchain tribalism and trying to pull each other down.

"It's really about convincing businesses around the true value you can unlock."

Decentralization, misconceptions, and learning from Facebook

Blockchain is rife with misconceptions, some pushed by people who don't know better and others by people with agendas. Many have impeded the adoption of the technology, especially at the enterprise level.

As a management consultant, Mérey encounters hundreds of top-level executives annually who believe these misconceptions, and he has embarked on a campaign to educate key decision-makers about blockchain.

He told CoinGeek Backstage that he has found the best approach is to avoid the sideshows and go straight to where the value is.

In work with governments and big enterprises, the issue of energy consumption often crops up. BTC's sky-high energy consumption has led many to assume that proof-of-work as a consensus mechanism is fundamentally flawed, and Ethereum's migration only cemented this misled belief.

However, BSV blockchain has proven that proof-of-work can be efficient. With its unlimited block sizes, it processed over a billion transactions last year. The upcoming Teranode upgrade will further enhance these capabilities, pushing the network to over a million transactions per second.

This year marked 20 years of Facebook (NASDAQ: META), and in those two decades, the social media revolution Mark Zuckerberg sparked has brought massive benefits to billions of people. However, it has also come at a cost, including the loss of privacy, rising depression, and addiction.

As blockchain embarks on the start of its journey towards becoming a household technology, are there lessons we can learn from previous tech cycles?

Mérey believes that one such lesson is that going fast isn't always the best way. He also noted that in addition to technical concerns, users must pose philosophical questions about any new technology, starting with blockchain.

Watch: Teranode & the Web3 world with edge-to-edge electronic value system

New to blockchain? Check out CoinGeeks Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.

View post:

Bringing blockchain to governments and enterprises, as explained by BCG's Tibor Mérey - CoinGeek

Read More..

Ubit Coin: Setting the Standard for Decentralized Cryptocurrencies – TechGraph

In the dynamic realm of cryptocurrencies, Ubit Coin shines as a beacon of true decentralization. With a supply of 990 million coins and integration into 11-12 ecosystems, Ubit Coin is swiftly ascending in prominence.

The pivotal aspect of Ubit Coin is its decentralized ownership structure. Ownership of Ubit Coin has been transferred to a null address, specifically 0x0000000000000000000000000000000000000000, marking it as a fully decentralized digital asset.
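Claims of renounced ownership like this can be checked on-chain by reading the contract's owner and comparing it to the zero address. The sketch below uses web3.py (v6-style API) with a placeholder RPC endpoint and token address, and assumes the token exposes a standard Ownable-style owner() view function; it is not taken from Ubit Coin's documentation.

```python
from web3 import Web3

ZERO_ADDRESS = "0x0000000000000000000000000000000000000000"

# Assumptions: the RPC endpoint and token address below are placeholders,
# and the token implements the common Ownable-style `owner()` view.
w3 = Web3(Web3.HTTPProvider("https://YOUR-RPC-ENDPOINT"))
token_address = Web3.to_checksum_address("0x0000000000000000000000000000000000000001")

OWNER_ABI = [{
    "name": "owner",
    "type": "function",
    "stateMutability": "view",
    "inputs": [],
    "outputs": [{"name": "", "type": "address"}],
}]

token = w3.eth.contract(address=token_address, abi=OWNER_ABI)
owner = token.functions.owner().call()
print("ownership renounced:", owner == ZERO_ADDRESS)
```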


This significant move to a null address highlights Ubit Coin's dedication to decentralization. No single entity or individual has the authority to edit the code, modify transaction fees, block addresses, or rescue tokens. The immutability of the Ubit Coin code ensures that the rules are consistent and transparent.

Transaction fees remain stable and predictable, and the inability to block addresses guarantees that transactions are conducted freely and fairly. The trustless nature of the system, where users are solely responsible for their assets, is emphasized by the inability to rescue UBIT20 tokens.

The transfer of ownership to a null address eliminates the risks associated with centralized control, ensuring that Ubit Coin remains in the hands of its users. This decentralization significantly enhances security, making Ubit Coin less vulnerable to hacks and breaches. Users can rely on the security of their transactions and the protection of their assets.

Ubit Coin's community-driven governance fosters a democratic approach, with decisions about the coin's future made collectively, considering the interests of all stakeholders.


Ubit Coin's code, set in stone, provides reliability and predictability, ensuring that the governing principles do not change arbitrarily. As adoption across various platforms continues, Ubit Coin's value and utility are poised to grow. The decentralized model guarantees that this growth is organic and community-driven.

Ubit Coin is leading the charge in the decentralized revolution within the cryptocurrency space. By transferring ownership to a null address, it truly embodies the principles of decentralization.


This move enhances security and trust, paving the way for sustained growth and community-driven development. Ubit Coin's fully decentralized nature sets it apart as a model digital currency: secure, reliable, and user-centric.


Read the original:

Ubit Coin: Setting the Standard for Decentralized Cryptocurrencies - TechGraph

Read More..

SHADOWS OF THE NIGHT (Crowd Funding) – Horror Society

The campaign for SHADOWS OF THE NIGHT, an unhinged paranormal horror film starring Bill Oberst Jr. (3 From Hell, Criminal Minds) and Debra Lamb (Point Break, Robocop), is currently LIVE on Indiegogo.

Other cast members include Lili Davis, Alexandria Allerheiligen, Brii Frank, Peter Goldthwaite, Kazuo Salazar, Alyssa Ramirez, and Michael Oliver.

According to award-winning filmmaker Todd Braley, "Shadows of the Night is going to be that movie that scares the absolute holy sh*t out of you. I promise."

Imagine a world where ancient evil meets modern technology, where the dark corners of the internet unleash horrors that defy explanation. In Shadows of the Night, a young priest discovers a malevolent spirit possibly spawned from the depths of the Dark Web. As he seeks help from the church, he encounters Bishop Dwyer, whose motives may not be as righteous as they first appear. Amidst this darkness, psychic grandmother Rose emerges, her own past hauntingly intertwined with the unfolding terror.

Indiegogo Campaign: https://www.indiegogo.com/projects/shadows-of-the-night-unhinged-paranormal-horror#/

Continued here:
SHADOWS OF THE NIGHT (Crowd Funding) - Horror Society

Read More..

AWS Graviton4 Benchmarks Prove To Deliver The Best ARM Cloud Server Performance – Phoronix


This week AWS announced that Graviton4 went into GA with the new R8G instances after Amazon originally announced their Graviton4 ARM64 server processors last year as built atop Arm Neoverse-V2 cores. I eagerly fired up some benchmarks myself and I was surprised by the generational uplift compared to Graviton3. At the same vCPU counts, the new Graviton4 cores are roughly matching Intel Sapphire Rapids performance while being able to tango with the AMD EPYC "Genoa" and consistently showing terrific generational uplift.

Graviton4 reached general availability this week, initially powering the new R8g instances. Graviton4-based R8g instances are promoted as offering up to 30% better performance than the prior-generation Graviton3-based R7g instances. Graviton3 CPUs sported 64 x Neoverse-V1 cores while Graviton4 has 96 x Neoverse-V2 cores based on the Armv9.0 ISA. The Neoverse-V2 cores with the Graviton4 have 2MB of L2 cache per core, twelve-channel DDR5-5600 memory, and other improvements over prior Graviton ARM64 processors.
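For context, that memory configuration implies roughly the theoretical peak bandwidth computed below; the assumption of 64-bit (8-byte) channels is mine, not the article's.

```python
# Back-of-the-envelope peak memory bandwidth for 12-channel DDR5-5600,
# assuming each channel is 64 bits (8 bytes) wide.
channels = 12
transfers_per_second = 5600e6   # DDR5-5600 -> 5600 MT/s per channel
bytes_per_transfer = 8          # 64-bit channel width

peak_bytes_per_second = channels * transfers_per_second * bytes_per_transfer
print(f"{peak_bytes_per_second / 1e9:.1f} GB/s")  # ~537.6 GB/s theoretical peak
```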

AWS promotes Graviton4 as offering up to 30% faster performance within web applications, 40% faster performance for databases, and 40%+ greater performance for Java software.

Being curious about the Graviton4 performance, I fired up some fresh AWS instances to compare the R8g instance to other same-sized instances. The "16xlarge" size was used across all testing, providing 64 vCPUs and 512GB of memory per instance. The instances tested for today's article included:

Graviton2 - r6g.16xlarge
Graviton3 - r7g.16xlarge
Graviton4 - r8g.16xlarge
AMD EPYC 9R14 - r7a.16xlarge
Intel Xeon 8488C - r7i.16xlarge

All instances were tested using Ubuntu 24.04 with the Linux 6.8 kernel and stock GCC 13.2 compiler.

It would have been interesting to compare to Ampere Computing's cloud ARM64 server processors, but that is not really feasible, unfortunately. With Ampere Altra (Max) in the cloud, as with Google's T2A Tau instances, only up to 48 vCPUs are available. And even then, Ampere Altra is making use of DDR4 memory and Neoverse-N1 cores... AmpereOne of course is the more direct competitor, albeit still not to be found. We still don't have our hands on any AmpereOne hardware nor any indications from Ampere Computing when they may finally send out review samples. Oracle Cloud was supposed to be GA by now with their AmpereOne cloud instances, but those remain unavailable as of writing, and Ampere Computing hasn't been able to provide any other access to AmpereOne for performance testing. Thus it's still MIA for what may be the closest ARM64 server processor competitor to Graviton4.

Let's see how Graviton4 looks -- and its performance per dollar in the AWS cloud -- compared to prior Graviton instances and the AMD EPYC and Intel Xeon competition. The performance per dollar values were based on the on-demand hourly rates.
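The exact pricing isn't reproduced in this excerpt, but performance per dollar from on-demand pricing is simply a benchmark result divided by the hourly rate. The rates and scores in the sketch below are placeholders, not actual AWS prices or Phoronix results.

```python
# Illustrative performance-per-dollar calculation; the instance rates and
# benchmark scores below are placeholders, not real AWS pricing or results.
on_demand_hourly = {"r7g.16xlarge": 3.50, "r8g.16xlarge": 3.80}     # USD/hour (made up)
benchmark_score = {"r7g.16xlarge": 1000.0, "r8g.16xlarge": 1300.0}  # higher is better

for instance, score in benchmark_score.items():
    perf_per_dollar = score / on_demand_hourly[instance]
    print(f"{instance}: {perf_per_dollar:.1f} score per USD/hour")
```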


Read the original:
AWS Graviton4 Benchmarks Prove To Deliver The Best ARM Cloud Server Performance - Phoronix

Read More..