
DNA storage finally reaches mainstream (well, sort of) but it will cost you a whopping $1000 per KB (yes, kilobyte) – TechRadar

DNA storage is finally within reach, with the promise of being able to store billions of terabytes of data in a tiny volume, for hundreds of years and at a minimal cost. In a tantalizing breakthrough, French firm Biomemory has rolled out credit-card-sized storage devices that can store one kilobyte of data each on DNA.

For $1,000, you get two identical Biomemory cards. Yes, they are very expensive (for context, the first commercially available hard drive, the IBM 350-1, retailed for $34,500 for 3.75MB) and slow, but this is primarily a proof of concept. The DNA on the card is dried; it has to be rehydrated and read by a sequencing machine (Biomemory has partnered with a third party to get it done).

DNA storage has been in the realm of science (and a bit of fiction) for decades but the past two years have seen a flurry of announcements (from giants like Microsoft and Seagate to smaller players like Catalog or Biomemory) that have made it one of the most likely candidates to help quench the global thirst for data storage.

Storing bytes in DNA could pave the way for near unlimited data storage capacity at extremely low acquisition cost, with market-leading operational costs (both to store and retrieve data) and a shelf-life counted in multiple decades.

For now though, there are some serious hurdles to clear before DNA can make its way up the storage food chain as the go-to replacement for tape, then hard drives. Other than price, there's the speed at which data is stored and retrieved (currently in the low MB/s) and the still relatively unproven nature of the medium.

It's been almost a decade since Seagate told us that the world would be running out of storage capacity by 2016 (spoiler alert: it never happened), but generative AI and machine learning will probably bring us to that crunch point much faster than expected.

Biomemory wants to launch a 100PB enterprise-focused DNA card by 2026, which is just a few years away. It took hard drives about 60 years to grow in size by a factor of 1,000,000 to 4TB.

The French startup wants to smash through this, achieving a growth factor of 100 trillion (that's one followed by 14 zeros) in under 36 months. If it manages to achieve that, it will be one of the biggest technological feats of all time, and Biomemory will become one of the hottest startups ever.
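For anyone who wants to check the arithmetic, the claimed growth factor falls out of a two-line calculation (a quick sketch assuming decimal, SI-style units):

```python
# Back-of-the-envelope check of the claimed growth factor (decimal units assumed).
card_today_bytes = 1 * 10**3        # 1 KB per card today
card_2026_bytes = 100 * 10**15      # 100 PB target for 2026
growth_factor = card_2026_bytes / card_today_bytes
print(f"{growth_factor:.0e}")       # 1e+14, i.e. 100 trillion
```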

Continued here:
DNA storage finally reaches mainstream (well, sort of) but it will cost you a whopping $1000 per KB (yes, kilobyte) - TechRadar

Read More..

What if there was AirBnb for the Cloud? – Capacity Media

80% of disk drives worldwide are only 20% full, says Ben Golub, CEO of innovative cloud provider Storj.

This fact is the very reason Storj exists.

Storj taps into this existing storage capacity from tens of thousands of nodes worldwide, eliminating the need to build new drives and data centres, he tells Capacity.

And alongside the obvious sustainability benefits this provides, it allows Storj to offer cloud storage for a tenth of the price of legacy providers. Golub says that the carbon emissions saved by using this model can reach up to 83%.

Storing data securely across tens of thousands of nodes worldwide eliminates a single point of failure, and because of parallelism, reduces congestion. This enables Storj to deliver upload and download speeds that are consistently high compared to players like AWS, Golub claims. The distributed model is also able to meet requirements that cloud users desire, such as speed, reliability, security, global accessibility and low cost.

However, industries that handle highly sensitive data like healthcare and finance need to meet certain compliance requirements that limit where data can be stored, Golub says.

This has inspired Storj to launch Storj Select, a new enterprise feature that delivers customisation on which of the thousands of nodes specific data is stored. With Storj Select, customer data is only stored on points of presence which meet customer specified qualifications.

CloudWave, a provider of healthcare data security solutions, was revealed yesterday as one of Storj Select's first customers.

The more than 280 hospitals that we work with across six countries are focused on protecting patients. Our role is to protect their patient data from cyber attacks, CloudWave said.

Storj Select's uniquely customisable compliance features, speed and security are a key component of our cloud strategy.

The idea for Storj Select came directly from feedback from enterprises that were interested in switching to its cloud storage platform but couldn't, because they needed to meet specific compliance requirements.

The new feature allows Storj to store data only on the storage nodes that meet those qualifications, such as SOC 2, a voluntary compliance standard that governs how data should be protected.

Storj Select opens the door for us to serve more customers like CloudWave, which is now a Storj customer because of the existence of Storj Select, Golub says.

As a disruptive startup, we have to be agile to meet our customers' needs, he continues. There are several aspects of our business model and architecture that allow us to bring this type of service online very quickly.

Firstly, because Storj uses existing capacity that's already spun up, powered and cooled in data centres, it doesn't face the same supply chain challenges as other providers when it comes to sourcing the physical hardware to bring a multi-petabyte storage service online.

Storj was also able to leverage globally distributed facilities that already had SOC 2 certifications in place, further reducing the time to market.

It was a coordinated effort to build the product capabilities, source the supply, and onboard our first enterprise customer, but 90% of what we needed was already in place, Golub says.

Golub also says that the product capabilities Storj built enable it to be much more dynamic in how it selects and restricts the nodes that will store a particular data set.

This enabled us to very rapidly bring online a tier of storage that stores data only in SOC 2 Type 2 data centres in the US.

How does it work?

Instead of storing data in a few centralised data centres that are vulnerable to outages and cyber attacks, Storj encrypts and splits data into small pieces, including redundancy, that are then stored across its global network, Golub explains.

When a download request is made, only a small subset of the fastest data segments are needed to reconstitute the file. This distributed architecture is designed to protect against outages, ransomware, and data compromise.
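To make the "split with redundancy, rebuild from any subset" idea concrete, here is a minimal Python sketch of a (k, n) Reed-Solomon-style erasure code. It is an illustration only, not Storj's implementation: the tiny GF(257) field, the piece sizes and the absence of an encryption layer are simplifications assumed for readability.

```python
# Toy (k, n) erasure code: every k bytes become the coefficients of a polynomial
# over GF(257), evaluated at n points. ANY k of the n pieces rebuild the data.
P = 257  # small prime field for the sketch; production codes use GF(2^8)/GF(2^16)

def poly_mul(a, b):
    """Multiply two polynomials (lowest-degree coefficient first) mod P."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % P
    return res

def encode(data: bytes, k: int, n: int):
    """Split `data` into n pieces such that any k of them reconstruct it."""
    pad = (-len(data)) % k
    data += bytes(pad)
    pieces = [[] for _ in range(n)]
    for off in range(0, len(data), k):
        coeffs = list(data[off:off + k])      # k bytes -> polynomial coefficients
        for x in range(1, n + 1):             # evaluate at points 1..n (Horner)
            y = 0
            for c in reversed(coeffs):
                y = (y * x + c) % P
            pieces[x - 1].append(y)
    return [(x, pieces[x - 1]) for x in range(1, n + 1)], pad

def decode(some_pieces, k: int, pad: int) -> bytes:
    """Rebuild the data from any k encoded pieces via Lagrange interpolation."""
    xs = [x for x, _ in some_pieces[:k]]
    cols = zip(*[vals for _, vals in some_pieces[:k]])
    out = bytearray()
    for ys in cols:
        coeffs = [0] * k
        for xj, yj in zip(xs, ys):
            num, den = [1], 1
            for xm in xs:
                if xm == xj:
                    continue
                num = poly_mul(num, [(-xm) % P, 1])   # multiply by (X - xm)
                den = den * (xj - xm) % P
            w = yj * pow(den, P - 2, P) % P           # yj / den  (mod P)
            for d, c in enumerate(num):
                coeffs[d] = (coeffs[d] + w * c) % P
        out.extend(coeffs)
    return bytes(out[:len(out) - pad] if pad else out)

# Any 3 of 5 pieces rebuild the file; a client can use whichever arrive fastest.
pieces, pad = encode(b"patient-record-0042", k=3, n=5)
assert decode([pieces[4], pieces[1], pieces[2]], k=3, pad=pad) == b"patient-record-0042"
```

In a production system the same idea runs over larger fields with far bigger pieces, and the encrypted pieces are scattered across independent nodes, so losing (or waiting on) a minority of them does not block a download.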

Storj Select takes this one step further and enhances data security by ensuring that customer data is only stored on nodes that meet the specific compliance requirements of that customer.

The distribution of data on the network is critical to performance, durability and security of data stored on the network, Golub continues.

All of the processes for selecting nodes to store a particular object, distributing the encrypted pieces of erasure-coded data to nodes, auditing the nodes and repairing data are fully automated.

Those processes are highly scalable, both horizontally by adding more nodes and resources, as well as vertically by scaling up the resources of a particular node, he explains.

The Storj Select service does require an initial manual process to set placement rules or geofenced regions and choose where the data in particular buckets or projects will be stored.

But once a placement rule is created and applied, everything after that is fully automated.

That automation ensures we achieve our eleven 9s of durability, with billions of objects stored on the Storj network in tens of billions of pieces distributed over tens of thousands of nodes in more than 100 countries, Golub says.

What's next for Storj?

We're continuing to innovate and adapt Storj to better meet the needs of our growing customer base. Additionally, we have some exciting new integrations, customers, partnerships and sustainability initiatives coming down the pipeline that we'll be sure to keep you apprised of! Golub concludes.

See the original post here:
What if there was AirBnb for the Cloud? - Capacity Media

Read More..

Amazon S3 storage picks up speed at re:Invent 2023 – TechTarget

The glut of generative AI products at AWS can only work if customers are sitting on a trove of usable data, leading the public cloud provider to introduce new storage services it sees as vital to AI training.

At AWS re:Invent 2023 this week, AWS expanded its cloud storage offerings with almost a dozen new object, file and block services.

File storage in the AWS cloud gained the majority of these services, including additions to the AWS native version of NetApp's data platform, OnTap. A few colder storage tiers for block and file rounded out the additions, further indicating the vendor's big data play, according to industry analysts.

But it was the additions to its object storage service that took center stage, with AWS CEO Adam Selipsky introducing a new Amazon S3 object storage tier during his opening keynote Tuesday. The new addition brings a higher-performance tier to the traditionally slower storage format.

Selipsky joked that storage "surely can't be an area that's ready for reinvention," but he went on to state that generative AI will need performance gains at every part of the tech stack.

Overall, the additions are incremental improvements to the AWS storage portfolio, said Dave Raffo, an analyst at Futurum Group. Compared with the vendor's headliner products such as Amazon Q, a generative AI assistant and a response to Microsoft's Copilot, these storage offerings aren't going to make any IT administrators transition to AWS.

Instead, the additions are meant to keep existing customers in AWS and help them further grow their data footprint. Supporting massive amounts of customer data to feed AI development is the current marketing play for storage vendors, Raffo said, and that message could once again change in several months.

"The announcements look incremental and are adding features to those [storage services]," he said.

Selipsky personally introduced Amazon S3 Express One Zone during his opening keynote.

Selipsky and AWS materials said the service will provide "single-digit millisecond data access" for customer applications using object storage but will be limited to operating in a single availability zone. The service mostly benefits the processing of large amounts of smaller objects, according to the vendor, which can lead to reduced runtime costs for other processing services.

AWS said use cases include AI/machine learning training, media processing and high-performance computing. The service starts at $0.016 per GB, compared with the $0.023 or $0.022 per GB for S3 Standard tiering.
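Taken at face value, the per-GB figures quoted above translate into a simple monthly storage estimate. This is a rough sketch using only the prices quoted in this article (storage charges only; request, transfer and availability-zone considerations are ignored, and current AWS pricing should be checked before relying on these numbers):

```python
# Rough monthly storage-cost comparison using the per-GB prices quoted above.
PRICES_PER_GB_MONTH = {
    "S3 Standard (as quoted)": 0.023,
    "S3 Express One Zone (as quoted)": 0.016,
}

def monthly_storage_cost(tib: float, price_per_gb: float) -> float:
    gb = tib * 1024  # 1 TiB = 1024 GiB; treating GiB ~ GB for a rough estimate
    return gb * price_per_gb

for tier, price in PRICES_PER_GB_MONTH.items():
    print(f"{tier}: ${monthly_storage_cost(100, price):,.0f}/month for 100 TiB")
```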

Analyst briefings on the service indicated the Express One Zone uses flash storage, rather than the traditional hard drives commonly used by object storage services, Raffo said.

Amazon S3, a common standard of object storage across cloud services, wasn't used with databases or other applications demanding high performance in the past due to its slower access speeds, said Scott Sinclair, an analyst at TechTarget's Enterprise Strategy Group.

Object storage was considered useful for less frequently accessed data or repository storage. Now, through the advent of cheaper flash memory, enterprises are seeing object storage as a way to maximize available storage and collect massive data sets for AI/ML development, Sinclair said.

Many cloud customers have already built up significant reserves of data ripe for AI training and are unlikely to pay the costs of moving it out, he said.

"Data has gravity," Sinclair said. "Once you get 2 to 4 petabytes in the clouds, it's not going anywhere."


Effective AI pipelines, at least those looking to maintain some speed for a business, are now looking to become all flash, he added.

"People need flash all throughout their data pipelines," Sinclair said. "It isn't small, fast storage or big, slow storage [anymore]. Bottom line: Everything needs to be fast."

Increased speed for cloud storage still isn't the cutting edge of performance AWS may tout it as, said Marc Staimer, president of Dragon Slayer Consulting.

Object storage isn't the best fit for databases or high-performance computing, which make use of block storage, and on-premises or colocated services can provide sub-millisecond access speeds, he said. Faster object storage can still benefit generative AI development, however, such as by expediting retrieval-augmented generation, a technique that grounds model responses in retrieved data.

"This is not a wow," Staimer said. "I'd call it high-performance object storage. If you really want performance, you're not using S3. You'll be using different storage."

About half of the storage products and capabilities debuting at re:Invent were related to file storage services offered by AWS, including Amazon Elastic File System (EFS) and Amazon FSx for OpenZFS.

Amazon EFS now offers replication failback, enabling replication of changes from backup to primary file systems, and an increase in total IOPS for the service. Amazon FSx for OpenZFS now has the ability to send snapshots from one file storage system in a customer's AWS account to another.

File storage customers in AWS have a new option for colder storage with the Amazon EFS Archive tier. The service targets data that customers expect to access about once a quarter and is priced at $0.008 per GB per month, compared with EFS Standard at $0.30 per GB or EFS Infrequent Access at $0.016 per GB.

For Amazon Elastic Block Store (EBS) customers, the AWS Backup service adds snapshot management capabilities with Snapshots Archive. EBS snapshots with retention periods of at least 90 days can be set to automatically move into the colder Snapshots Archive as part of the snapshot's lifecycle.

The AWS Backup service now offers automatic restore test and validation as well, enabling customers to frequently test their recovery process in case of disaster.

Amazon FSx for NetApp OnTap, an AWS native file service built on NetApp's technology, also gained new additions including a scale-out file system capability, multiple availability zones for virtual private clouds, and storage management for FlexGroup volumes with the AWS console.

The new OnTap scale-out capability increases the total storage NetApp's platform can manage within AWS. A scale-out OnTap system can now handle up to 1 pebibyte of data, a little more than a traditional petabyte, compared with the 192 tebibytes of a scale-up system.

Other improvements include increased throughput for reads and writes, as well as IOPS increases. The option is limited to a single availability zone, compared with the multiple zones of scale-up systems.

AWS has showcased NetApp as a file system partner in the past few years, as creating and managing file storage systems are challenging, Sinclair said. Considering NetApp's storage legacy, and its business with other public clouds, Sinclair said he believes AWS benefits far more from courting NetApp than vice versa.

"Building a good enterprise file system is incredibly difficult," Sinclair said. "When you're looking at third-party storage options in AWS, having it with an AWS interface is a differentiation."

Tim McCarthy is a journalist living on the North Shore of Massachusetts. He covers cloud and data storage news.

Read more:
Amazon S3 storage picks up speed at re:Invent 2023 - TechTarget

Read More..

HPE goes all-in for AI with new hybrid cloud compute, storage products – Network World

At its annual HPE Discover event in Barcelona Thursday, Hewlett Packard Enterprise (HPE) unveiled a range of new and enhanced hybrid cloud products -- including storage, development tools, and rack-scale hardware co-engineered with Nvidia -- aimed at easing the development and deployment of generative AI systems for enterprises.

"We think the next decade is going to require full stack thinking from hardware all the way up through the software layers, as organizations lean into the deployment of AI applications," Evan Sparks, HPE's vice president and general manager for AI Solutions and Supercomputing Cloud told a news conference. "These applications are extremely computationally intensive. They require data as a first-class input and lots of lots of data. And our view is that this new architecture will require more thinking in terms of scale-up processing; they need to do workloads that are hybrid by design."

The announcement came a day after HPE announced its fourth-quarter earnings in which revenue and profit were down on the whole and down for traditional categories of storage and computing but up for high-performance computing, AI, and intelligent edge. While overall revenue declined 7% year over year to $7.4 billion, and traditional compute sales for servers was down 31% to $2.6 billion, intelligent edge revenue jumped 41% to $1.4 billion and HPC and AI products were up 37% to $1.2 billion.

HPE GreenLake for File Storage, an all-flash data platform, is being adapted to better support growing and evolving large-scale AI workloads, including those based on generative AI and large language models (LLMs). The platform has been upgraded for AI model training and tuning. The improvements include increased performance density, integration with Nvidia's new Quantum-2 InfiniBand for better GPU-focused computing, and significantly boosted throughput and capacity.

HPE previously said it is launching a cloud-based service powered by its supercomputers to support advanced AI applications. Initially focusing on LLMs, HPE plans to extend this service to specialized AI applications in areas including climate modeling, healthcare, finance, manufacturing, and transportation. Part of the HPE GreenLake portfolio, these services will be distinct in being exclusively cloud-based, unlike other GreenLake offerings that are on-premises or hybrid.

HPE and Nvidia are collaborating to develop a computing hardware and software platform specifically for generative AI tuning and inferencing -- the process where an AI model makes predictions and decisions on new data -- in enterprise data centers. The idea is to make it easier for businesses, regardless of their size, to adapt and use AI. This new offering is designed to be user-friendly, allowing companies to customize AI models with private data and deploy them from local devices to cloud-based systems.

The hardware component of the platform is based on a rack-scale architecture, where computing components are managed at the level of an entire rack of servers as a single entity to optimize resources. It includes HPE ProLiant Compute DL380a servers incorporating Nvidia L40S GPUs and Nvidia BlueField-3 DPUs (data processing units).

HPE's Machine Learning Development Environment Software has been updated to a managed service as a cloud-based solution for AI model training. The service, also included in its new AI tuning platform, is intended to help businesses quickly and safely start or advance their generative AI projects. It simplifies the training process of AI models, supposedly making it faster to develop them, and it's designed to adapt to future needs, reducing the strain on management and processing resources. Additionally, the software includes new features for generative AI to enable quick testing and prototyping of models.

The new AI platform also includes HPE Ezmeral Software, which has been updated to make it easier and quicker for businesses to handle data, analytics, and AI tasks. The software platform, which allows users to run cloud-native or non-cloud-native applications in containers, is designed to work smoothly across various cloud environments. Key updates include a more efficient hybrid data lakehouse, which is now better optimized for GPU and CPU usage, facilitating easier data management and analysis. Also, integration with HPE Machine Learning Development Environment Software is meant to improve model training and tuning. The software also features better GPU allocation management, enhancing performance across different workloads. Moreover, Ezmeral now supports more third-party tools, such as Whylogs for monitoring models and Voltron Data for faster, GPU-accelerated data queries.

Customers will be able to order the new AI platform and the various updated components in the first quarter of 2024.

HPE also announced a GreenLake Flex Solution for Digital Twin built on Nvidia OVX-certified HPE ProLiant Gen11 servers with Nvidia L40S GPUs. The solution is meant to allow users to create, simulate, and optimize virtual assets and processes. It's a scalable, multi-GPU framework that provides the necessary infrastructure, software, and services to leverage the benefits of industrial digitalization. The digital twin features AI-accelerated infrastructure and Nvidia Omniverse Enterprise, designed to enable rapid insights from all available data. The company says the solution combines a public cloud's flexibility with a private cloud's security.

A digital twin is like a virtual model of a real-life object or system. This technology, which started with individual items, now covers bigger things like buildings, factories, and whole cities. Some even think it can be applied to people and to processes, broadening the idea of digital twins further.

View post:
HPE goes all-in for AI with new hybrid cloud compute, storage products - Network World

Read More..

8 Best Indoor Security Cameras (2023): For Homes and Apartments – WIRED

This feature-packed camera from Eufy is likely overkill for most folks, but it's an impressively versatile device. It combines a regular camera with a 130-degree field of view that can go up to 4K with a 2K telephoto lens that provides 3x optical zoom. It also has 360-degree pan and 75-degree tilt controls, AI tracking that works well, and support for up to four preset positions it can patrol through, including the default it returns to after tracking a subject out of frame. There's also a privacy mode, two-way audio, and onboard AI to detect people, pets, and sounds. You can record locally with a microSD card up to 128 GB (not included), hook it up to a HomeBase 3 (sold separately), or subscribe for cloud storage from $3 per month.

The footage is crisp and detailed enough to zoom in on, though bright areas like sun streaming in a window can appear blown out. Because the frame rate is 15 fps, fast-moving subjects sometimes appear blurry. The motion detection is reliable, and you can set up privacy zones in the app. Notifications are swift and come with a thumbnail if you don't mind uploading to the cloud (it is optional). I sometimes noticed a slight lag on the live feed, and sound quality could be better. There is no HomeKit support, but you get Alexa and Google Assistant, though the camera was very slow and sometimes failed to load for me via Google.

Note: After a security researcher identified cloud uploads from a local-only device, and after a May 2021 bug exposed some camera feeds to other Eufy users and a report warned of unencrypted video streaming, we stopped recommending the brand. After initial denials, parent company Anker acknowledged and fixed the issues, overhauled its policies, and instituted a bug bounty program. We spoke with third-party security researcher Ralph Echemendia, who was hired to conduct an audit, and have decided to start testing Eufy cameras again.

Read the rest here:
8 Best Indoor Security Cameras (2023): For Homes and Apartments - WIRED

Read More..

Get to know the different storage options for your irreplaceable digital images – Digital Camera World

As photographers, having a safe place to back up our images is essential, especially if clients are dependent on our material.

Create your own archive

Every photographer works with their own archive system: this could be chronologically ordered or structured according to different themes or projects. As long as it makes sense to you and the material can be accessed quickly, there is no right or wrong system. Remember to import photos regularly to back them up and save time with your organisation.

External hard drives are a solution for photographers who travel a lot and need access to their images with or without an internet connection. In general, keep an eye on these three key features: speed, storage size and safety.

Apart from backing up essential data, organizing files is also important for many professional photographers. Modern camera technology allows us to produce huge numbers of digital files; however, even if they are safely stored, these images are of little use if we do not save them to an archive with a proper filing system. After all, we don't want our valuable work and precious memories to be hidden away in folders where they can't be shared with the world.

A NAS (network-attached storage) device works as a local cloud storage system, customised to your project's needs. NAS systems aren't usually supplied with hard disks inside, so you will usually need to add your own drives. The advantage here is that a NAS works over a WLAN connection, meaning that everyone who uses this specific network can access files via a password. For example, the WD My Cloud EX2 Ultra NAS drive is an excellent system, which is even equipped with a dual bay for RAID setups.

A RAID drive is one of the safest storage options around. It operates with two or more hard disks that work like mirrors and create an exact copy of the data. Systems are available in different expansion levels, including the TerraMaster D5 Thunderbolt 3, which works with five disks. That means you can make more than one copy of your files, so there isn't the same risk of losing material through technical issues or failure.
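As a rough guide to the capacity trade-off involved, the usable space of the common RAID layouts can be estimated as below (a simplified sketch assuming identical disks; the exact options depend on the enclosure):

```python
# Quick usable-capacity estimates for common RAID layouts with identical disks.
def usable_tb(level: str, disks: int, size_tb: float) -> float:
    if level == "RAID 0":   # striping, no redundancy
        return disks * size_tb
    if level == "RAID 1":   # mirroring: every disk holds a full copy
        return size_tb
    if level == "RAID 5":   # one disk's worth of capacity goes to parity
        return (disks - 1) * size_tb
    raise ValueError(level)

# e.g. five 8TB disks, as in a five-bay enclosure
for level in ("RAID 0", "RAID 1", "RAID 5"):
    print(level, usable_tb(level, disks=5, size_tb=8), "TB usable")
```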

A great solution when it comes to saving not only current projects but also your whole photography archive, a desktop drive can often be more affordable than other solutions. It also operates offline, which ensures a high level of security, as no third party can access your files. The WD My Book Duo, for example, is an excellent option: it operates offline, offers an impressive capacity of up to 24TB and features two bays for RAID support.

Photographers often lead busy lives, travelling around and requiring access to their images wherever they are. One simple and practical solution is a portable SSD: their size, weight and robust housing mean you can take them with you wherever you go. However, most devices operate with only a single drive, so you may need an additional backup device. The LaCie Rugged RAID Pro is an advanced solution that automatically provides a backup but is more expensive than other SSDs.

There are lots of cloud-based storage services competing to offer you a place to back up your files. The advantage here is that you can access files from any device, wherever you are, and you can easily share projects with clients or the rest of the team. Google Drive operates with the same security protocol as other Google services. One downside of this storage system is that the rate at which data uploads depends on your internet speed, so uploading large files can often take a long time.
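A quick way to gauge whether cloud backup is practical for a given archive is to divide its size by your upload bandwidth. The figures below are illustrative assumptions, not measured speeds:

```python
# Rough upload-time estimate for cloud backups: time = size / upload bandwidth.
def upload_hours(size_gb: float, upload_mbps: float) -> float:
    size_megabits = size_gb * 1000 * 8          # decimal GB -> megabits
    return size_megabits / upload_mbps / 3600   # seconds -> hours

# e.g. a 500 GB photo archive on a 20 Mbit/s uplink
print(f"{upload_hours(500, 20):.1f} hours")     # ~55.6 hours
```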

Original post:
Get to know the different storage options for your irreplaceable digital images - Digital Camera World

Read More..

AWS Launches High-Speed Amazon S3 Express One Zone – Datanami


Amazon Web Services (AWS) is a major player in the public cloud space; however, with several emerging competitors, it needs to keep evolving and innovating to continue enjoying the largest slice of the cloud pie.

At its annual re:Invent conference in Las Vegas, AWS announced the general availability of Amazon S3 Express One Zone, a new storage class designed to deliver better performance and speed than the standard edition. Amazon claims Express One Zone offers a 10x performance improvement over the standard S3 service and is capable of handling hundreds of thousands of requests per second with consistent single-digit millisecond latency.

The new Amazon S3 Express One Zone is tailor-made for data-intensive applications such as AI/ML training, high-performance computing, and financial modeling. With its extremely low latency, the upgrade works well for objects of any size, including very large numbers of small objects.

According to AWS, Express One Zone offers a 50 percent reduction in data request costs and a 60 percent reduction in compute expenses compared with the standard S3 storage class. This means that Spot and On-Demand compute resources can be used more efficiently and deprovisioned earlier, helping to reduce overall processing costs.

Launched in 2006, the Amazon Simple Storage Service (S3) is one of the most popular cloud storage services, with millions of customers around the globe. Over the years, S3 has gone through several upgrades, evolving from humble internet storage for personal use to the backbone of the modern data stack at many data-driven enterprises.

With the new Amazon S3 Express One Zone, you get a new bucket type and a new authentication model optimized for AI/ML use cases and other data-intensive workloads. The new bucket type is specific to this storage class and can support hundreds of thousands of requests per second. It has a hierarchical namespace and stores object key names in a directory-like manner to deliver high efficiency.

Traditionally, Amazon S3 users have been able to choose specific AWS regions to store S3 data. With the new storage class, users can also choose to co-locate One Zone data in the same availability zone as compute resources.

Millions of customers rely on Amazon S3 for everything from low-cost archival storage to petabyte-scale data lakes, and they want to expand their use to support their most performance-intensive applications where every millisecond counts, said James Kirschner, general manager of Amazon S3 at AWS.

Amazon S3 Express One Zone delivers the fastest data access speed for the most latency-sensitive applications and enables customers to make millions of requests per minute for their highly accessed datasets, while also reducing request and compute costs.

In 2021, AWS launched S3 Object Lambda, and that same year it added the S3 Glacier Instant Retrieval storage class to its platform. The launch of Amazon S3 Express One Zone marks the most significant update to Amazon S3 in recent years. It also signals the continued longevity and popularity of the service. Cloudera, Colorfront, ChaosSearch, and Pinterest are among the customers and partners already using Amazon S3 Express One Zone.

Related Items

AWS Announces Powerful New Offerings to Accelerate Generative AI Innovation

Rubrik Helps Customers Further Protect Amazon S3 Data

AWS Unleashes the DataZone

See original here:
AWS Launches High-Speed Amazon S3 Express One Zone - Datanami

Read More..

Unlocking neutron star rotation anomalies: Insights from quantum simulation – Phys.org

A collaboration between quantum physicists and astrophysicists, led by Francesca Ferlaino and Massimo Mannarelli, has achieved a significant breakthrough in understanding neutron star glitches. They were able to numerically simulate this enigmatic cosmic phenomenon with ultracold dipolar atoms. This research, now published in Physical Review Letters, establishes a strong link between quantum mechanics and astrophysics and paves the way for quantum simulation of stellar objects from Earth.

Neutron stars have fascinated and puzzled scientists since the first detected signature in 1967. Known for their periodic flashes of light and rapid rotation, neutron stars are among the densest objects in the universe, with a mass comparable to that of the sun but compressed into a sphere only about 20 kilometers in diameter.

These stellar objects exhibit a peculiar behavior known as a "glitch," where the star suddenly speeds up its spin. This phenomenon suggests that neutron stars might be partly superfluid. In a superfluid, rotation is characterized by numerous tiny vortices, each carrying a fraction of angular momentum. A glitch occurs when these vortices escape from the star's inner crust to its solid outer crust, thereby increasing the star's rotational speed.
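In the textbook two-component picture (a standard back-of-the-envelope description, not a result taken from the new paper), the spin-up follows from conservation of angular momentum between the superfluid interior and the crust:

```latex
% Two-component estimate: superfluid (s) angular momentum is transferred to the
% crust (c), which observers see as a sudden fractional spin-up of the pulsar.
\[
  I_c\,\Delta\Omega_c \;=\; -\,I_s\,\Delta\Omega_s
  \qquad\Longrightarrow\qquad
  \frac{\Delta\Omega_c}{\Omega_c} \;=\; \frac{I_s}{I_c}\,\frac{|\Delta\Omega_s|}{\Omega_c},
\]
% with observed glitch sizes of order $10^{-9}$ to $10^{-6}$ in known pulsars.
```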

The key ingredient for this study lies in the concept of a "supersolid", a state that exhibits both crystalline and superfluid properties, which is predicted to be a necessary ingredient of neutron star glitches. Quantized vortices nest within the supersolid until they collectively escape and are consequently absorbed by the outer crust of the star, accelerating its rotation. Recently, the supersolid phase has been realized in experiments with ultracold dipolar atoms, providing a unique opportunity to simulate the conditions within a neutron star.

The study by researchers at the University of Innsbruck and the Austrian Academy of Sciences as well as the Laboratori Nazionali del Gran Sasso and the Gran Sasso Science Institute in Italy demonstrates that glitches can occur in ultracold supersolids, serving as versatile analogs for the inside of neutron stars. This groundbreaking approach allows for a detailed exploration of the glitch mechanism, including its dependence on the quality of the supersolid.

"Our research establishes a strong link between quantum mechanics and astrophysics and provides a new perspective on the inner nature of neutron stars," says first author Elena Poli. Glitches provide valuable insights into the internal structure and dynamics of neutron stars. By studying these events, scientists can learn more about the properties of matter under extreme conditions.

"This research shows a new approach to gain insights into the behavior of neutron stars and opens new avenues for the quantum simulation of stellar objects from low-energy Earth laboratories," says Francesca Ferlaino.

More information: Elena Poli et al, Glitches in Rotating Supersolids, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.131.223401

Journal information: Physical Review Letters

Link:

Unlocking neutron star rotation anomalies: Insights from quantum simulation - Phys.org

Read More..

Physicists May Have Found a Hard Limit on The Performance of Large Quantum Computers – ScienceAlert

A newly discovered trade-off in the way time-keeping devices operate on a fundamental level could set a hard limit on the performance of large-scale quantum computers, according to researchers from the Vienna University of Technology.

While the issue isn't exactly pressing, our ability to grow systems based on quantum operations from backroom prototypes into practical number-crunching behemoths will depend on how well we can reliably dissect the day into ever finer portions. This is a feat the researchers say will become increasingly challenging.

Whether you're counting the seconds with whispers of Mississippi or dividing them up with the pendulum-swing of an electron in atomic confinement, the measure of time is bound by the limits of physics itself.

One of these limits involves the resolution with which time can be split. Measures of any event shorter than 5.39 × 10⁻⁴⁴ seconds, for example, run afoul of theories on the basic functions of the Universe. They just don't make any sense, in other words.
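That threshold is the Planck time, built from the fundamental constants:

```latex
% Planck time: the shortest interval for which current theories remain meaningful.
\[
  t_\mathrm{P} \;=\; \sqrt{\frac{\hbar G}{c^{5}}} \;\approx\; 5.39 \times 10^{-44}\ \mathrm{s}
\]
```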

Yet even before we get to that hard line in the sands of time, physicists think there is a toll to be paid that could prevent us from continuing to measure ever smaller units.

Sooner or later, every clock winds down. The pendulum slows, the battery dies, the atomic laser needs resetting. This isn't merely an engineering challenge: the march of time itself is a feature of the Universe's progress from a highly ordered state to an entangled, chaotic mess, in what is known as entropy.

"Time measurement always has to do with entropy," says senior author Marcus Huber, a systems engineer who leads a research group in the intersection of Quantum Information and Quantum Thermodynamics at the Vienna University of Technology.

In their recently published theorem, Huber and his team lay out the logic that connects entropy as a thermodynamic phenomenon with resolution, demonstrating that unless you've got infinite energy at your fingertips, your fast-ticking clock will eventually run into precision problems.

Or as the study's first author, theoretical physicist Florian Meier, puts it, "That means: Either the clock works quickly or it works precisely; both are not possible at the same time."

This might not be a major problem if you want to count out seconds that won't deviate over the lifetime of our Universe. But for technologies like quantum computing, which rely on the temperamental nature of particles hovering on the edge of existence, timing is everything.

This isn't a big problem when the number of particles is small. As they increase in number, the risk that any one of them could be knocked out of its quantum critical state rises, leaving less and less time to carry out the necessary computations.

Plenty of research has gone into exploring the potential for errors in quantum technology caused by a noisy, imperfect Universe. This appears to be the first time researchers have looked at the physics of timekeeping itself as a potential obstacle.

"Currently, the accuracy of quantum computers is still limited by other factors, for example the precision of the components used or electromagnetic fields," says Huber.

"But our calculations also show that today we are not far from the regime in which the fundamental limits of time measurement play the decisive role."

It's likely other advances in quantum computing will improve stability, reduce errors, and 'buy time' for scaled-up devices to operate in optimal ways. But whether entropy will have the final say on just how powerful quantum computers can get, only time will tell.

This research was published in Physical Review Letters.

Read the rest here:

Physicists May Have Found a Hard Limit on The Performance of Large Quantum Computers - ScienceAlert

Read More..

‘Wobbly spacetime’ is latest stab at unifying physics – The Register

Since the early 20th century, physicists have struggled to marry theories governing the very big with those for the very small.

Despite the staggering achievements in modern science, the conflict between Einstein's general theory of relativity and quantum mechanics has become a stumbling block in developing a consistent, reliable theory explaining everything.

University College London professor Jonathan Oppenheim proposes to overcome the barrier with the idea of "wobbly spacetime."

Earlier efforts to unify the two main columns of modern physics had gone with the idea that gravity, the nature of which is reliably explained by General Relativity, should somehow be quantized. That means divided into discrete blocks of magnitude rather than distributed along a continuum that can always be subdivided. Two prominent exponents of the idea are string theory and loop quantum gravity.

But the professor of quantum theory argues that making quantum theory fit relativity would be more fruitful.

In a paper published this week in the journal Physical Review X, Oppenheim proposes to retain the classical nature of gravity but allow for the probabilistic nature of quantum mechanics by inserting certain unpredictable but continuous "wobbles" into spacetime itself.

His approach relies on two separate statistical approaches for the quantum and classical aspects of a system. "In the statistical description of the quantum side, states are described using density operators that evolve as if the system were open, that is, susceptible to uncontrolled influences from the environment," an accompanying article explains.

"In the statistical description of the classical side, states are probability distributions on phase space a framework that is often used to model large numbers of particles, where one does not know the individual position and momentum of each particle."

"The rate at which time flows is changing randomly and fluctuating in time," Oppenheim told The Guardian. "It's quite mathematical. Picturing it in your head is quite difficult."

Another paper published in Nature Communications, written by Oppenheim's colleague, PhD student Zach Weller-Davies, proposes approaches to verify or disprove the theory experimentally.

"We have shown that if spacetime doesn't have a quantum nature, then there must be random fluctuations in the curvature of spacetime, which have a particular signature that can be verified experimentally," he told website Physics.org.

"If spacetime is classical, the fluctuations have to be larger than a certain scale, and this scale can be determined by another experiment where we test how long we can put a heavy atom in superposition of being in two different locations."

But this being theoretical physics, not everyone is convinced. Loop theory proponent Carlo Rovelli, an Italian theoretical physicist, told The Guardian: "I think it is good that Oppenheim explores this possibility, even if not very plausible, but big claims about a 'New theory unites Einstein's gravity with quantum mechanics' sounds a bit overblown to me."

Rovelli has signed a 5,000-to-one bet with Oppenheim against the theory being proven correct. So much for unity in physics.

See the original post:

'Wobbly spacetime' is latest stab at unifying physics - The Register

Read More..