
Five things to look for at AWS re:Invent 2023 – SiliconANGLE News

Thanksgiving has passed in the U.S., so it's on to the December holiday season. But before we can deck the halls and jingle some bells, an important event is coming up: AWS re:Invent 2023.

This upcoming week, an estimated 60,000-plus people will descend upon Las Vegas to check out the latest and greatest from the cloud computing leader. One of the things I like most about the show is that Amazon Web Services Inc. has done a nice job of evolving the event from being a vendor show to more of an industry show for all things cloud computing.

Most businesses seem to have shaken off the COVID cobwebs and have been full steam ahead with many emerging technologies. Because of this, I've been looking forward to this year's show for some time. Here are the top five things I'll look for at re:Invent:

This might be the most obvious thing to look for this year, as it's hard to go to any event without being hit over the head with generative AI. Specifically, with AWS, one would assume that generative AI will be sprinkled throughout the keynotes as part of AWS' vision for the industry.

However, I am expecting AWS to talk more than vision. The company always does a great job of parading customers out onstage during the keynotes to provide specific examples of how technology can transform a business. I'm hoping to see some in this area. Also, one of the core principles of AWS has always been to drive innovation to make things easier for its customers, and generative AI can do that as an embedded technology.

Some people say that AWS is behind on generative AI, which contradicts what I've seen in my customer conversations. I do think it's fair to say that AWS is behind on generative AI marketing, but the company has never been one to have its marketing outpace innovation.

For decades, the industry relied primarily on off-the-shelf silicon to power infrastructure, but over the past several years, more companies have built their own. I've talked to AWS about why it's investing heavily in silicon, and executives told me building custom chips allows it to operate faster, at a lower cost, while providing better performance.

There is also a power benefit, which is critical as sustainability has become such a big driver of modernization. Although AWS has used re:Invent to launch the latest and greatest services, its most established and largest volume services are infrastructure and silicon, which help customers save money with better performance.

Most people think of AWS as a company that provides building blocks for applications, but it also offers several of its own software-as-a-service applications. A few years ago, AWS released Connect, a cloud-native, AI-based contact center solution. Last year, the company took the covers off AWS Supply Chain, which, as one would expect, also uses cloud-native and AI as its differentiator.

Although the applications are certainly newsworthy, what's more interesting is how AWS rolls out applications. Both Supply Chain and Connect were built off internally facing services. As an example, Amazon has one of the biggest contact centers in the world, with, I believe, somewhere in the range of 70,000 agents, so once it became customer-facing, it was already fully baked.

The same can be said for Supply Chain. Amazon has so many homegrown applications (logistics, human resources, inventory management and so on) that the challenge is prioritizing which ones to roll out. We should expect to see more of these applications at every re:Invent. With a pay-per-use model, this gives customers of all sizes access to applications that may have been out of their reach.

5G has been hyped for over five years, but it's yet to have the societal impact many people predicted. I believe we are on the precipice of seeing 5G finally have the transformative effect we have all been waiting for, and AWS is playing a key role in the infrastructure and showcasing the use cases.

Earlier this year at Mobile World Congress, AWS created its NextLevel Experience area, demonstrating 5G-enabled services. The organization has a massive 5G ecosystem, and I'm looking forward to checking out the expo hall, where plenty of 5G should be on display.

The catalog of AWS services is massive, and many customers like to put the building blocks together. Developers use the services, assemble the components and build something. This works for many organizations but not for all, as tying the services together can require a lot of heavy lifting and specialized skills.

Over the past few years, AWS has done a better job of pre-integrating the products to simplify deployment with predictable results. At re:Invent 2022, AWS announced the integration of its relational database, Aurora, and Redshift for cloud data warehousing. Customers rarely buy one without the other, so putting them together makes things easier for AWS customers. Over the next week, we should expect more solutions that create a "better together" story.

Given the massive hype around generative AI, I expect anything related to it to steal the show this year. However, there's more to the event than that, and I would encourage anyone attending to investigate the areas I have outlined as a good starting point for AWS innovation.

Zeus Kerravala is a principal analyst at ZK Research, a division of Kerravala Consulting. He wrote this for SiliconANGLE.


See the original post here:
Five things to look for at AWS re:Invent 2023 - SiliconANGLE News


What You Need to Know About Hybrid Cloud Computing – What You … – InformationWeek

A hybrid cloud is a mixed computing environment that allows applications to run with the support of computing, storage, and services in multiple environments, including public and private clouds, on-site data centers, and even edge locations. Hybrid cloud computing continues to gain momentum, since the number of organizations relying entirely on a single public cloud is now dwindling rapidly.

"In a hybrid cloud environment, organizations have the flexibility to run certain services or store data on their own servers (on-premises or in a private cloud) while also leveraging the resources and scalability of the public cloud," says Mayank Jindal, an Amazon software development engineer, in an email interview. The approach allows organizations to meet specific security, compliance, or performance requirements, as well as providing the ability to seamlessly move workloads between different environments based on their evolving needs.

"A hybrid cloud allows adopters to optimize and tailor their infrastructure/resources for business needs, performance, cost, security, and governance," explains Alok Shankar, engineering manager and technical lead for Oracle Cloud Migrations, via email. "It also allows adoption at a pace that is comfortable for an organization."

Related:3 Ways to Maximize Cloud Investments

Shankar notes that an organization may start its hybrid cloud journey quite modestly, by replacing disk storage with object storage, and slowly migrate other components as needed. Usually, applications needing high security or low latency can be kept on-premises, while others needing elasticity or rapid scaling can be migrated to the public cloud.

A hybrid cloud's flexibility can be extremely useful, Shankar says. "In most cases, there are cost and ROI implications that can save millions of dollars," he states. If an organization migrates components to the cloud, it can save the expense of adding extra machines to its data center, which may be needed only temporarily. "Cloud elasticity is on-demand, cheaper, and you only pay for what you use when compared to on-premise solutions."
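Shankar's cost point is easy to see with rough numbers. Below is a minimal Python sketch with entirely hypothetical prices, comparing the cost of buying burst servers outright against renting equivalent cloud capacity only for the hours of a seasonal spike:

```python
# Hypothetical cost comparison for temporary burst capacity.
# All prices are illustrative placeholders, not real vendor rates.

def on_prem_cost(servers: int, capex_per_server: float = 8_000.0,
                 monthly_opex_per_server: float = 150.0,
                 months_owned: int = 36) -> float:
    """Owning the burst capacity for its full hardware lifetime."""
    return servers * (capex_per_server + monthly_opex_per_server * months_owned)

def cloud_burst_cost(servers: int, hourly_rate: float = 0.50,
                     burst_hours: int = 200) -> float:
    """Renting the same capacity only while the spike lasts."""
    return servers * hourly_rate * burst_hours

n = 20  # extra servers needed only during a short seasonal burst
print(f"on-prem: ${on_prem_cost(n):,.0f}")      # capacity sits idle afterwards
print(f"cloud:   ${cloud_burst_cost(n):,.0f}")  # billed only for burst hours
```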

By choosing exactly where data lives, a hybrid cloud can also help an organization meet regulatory and compliance requirements. "For example, storing data in a European country due to GDPR regulations will be faster in the cloud if you don't have a data center in that particular geography," Shankar explains.

Meanwhile, security can be enhanced by maintaining tighter control over sensitive information. "One can keep sensitive components on-premises and use the cloud for less critical data and applications," Shankar notes.

Related:How to Minimize Multi-Cloud Complexity

In an email interview, Bernie Hoecker, a partner and enterprise cloud transformation leader with technology research and advisory firm ISG, reports that benefits can be narrowed to four specific attributes.

Cost savings. Hybrid cloud adopters can avoid capital expenditures by leveraging the public cloud and running the applications in a SaaS model. This strategy also avoids the need for ongoing hardware maintenance.

Scalability. Public cloud models can scale up or down during usage spikes. Pay-as-you-go models also provide the opportunity to help balance cost and revenue.

Flexibility. The hybrid cloud allows adopters to select the most suitable environments for specific workloads. An example would be applications that are required to run in a private cloud for compliance or regulatory statutes.

Performance. Adopters can select the cloud environment that best serves their end users' needs. Client demands differ by industry and persona. A hybrid cloud model offers multiple avenues to satisfy client demand.

A hybrid cloud strategy should be considered only after careful evaluation, Shankar says. "The first step should be a careful assessment of the organization's needs and requirements," he suggests. "If it seems like a good fit, you can start building a hybrid cloud strategy."

Related:The Case of Climbing Cloud Costs Optimizing Hybrid IT Strategy

When considering hybrid cloud adoption, it's important to understand that a successful implementation often involves ongoing monitoring, evaluation, and adjustments, Jindal says. "Technology evolves, and an organization's needs may change over time, so regular assessments of cost-effectiveness and security measures are crucial," he warns. "Staying adaptable and informed about the latest developments in the hybrid cloud space can lead to better outcomes and long-term success."

The biggest misconception about hybrid cloud computing is that it's overly complex and challenging to manage, Jindal says. "In reality, with proper planning, the right tools, and expertise, hybrid cloud environments can be effectively managed without excessive complexity." The key, he notes, is to carefully design and implement the hybrid cloud strategy to align with the organization's objectives and requirements.

Another mistaken belief, Hoecker says, is that hybrid clouds are only for large enterprises. "Hybrid clouds are used by large and small firms and, in many cases, start-ups leverage this approach." He also dismisses the misconception that hybrid clouds aren't reliable. "This is not true," Hoecker states. "Hybrid cloud providers offer high-availability offerings and uptime SLAs."

More here:
What You Need to Know About Hybrid Cloud Computing - What You ... - InformationWeek


Research reveals rare metal could offer revolutionary switch for future quantum devices – Phys.org


Quantum scientists have discovered a rare phenomenon that could hold the key to creating a 'perfect switch' in quantum devices which flips between being an insulator and a superconductor.

The research, led by the University of Bristol and published in Science, found these two opposing electronic states exist within purple bronze, a unique one-dimensional metal composed of individual conducting chains of atoms.

Tiny changes in the material, prompted by a small stimulus like heat or light, for instance, may trigger an instant transition from an insulating state with zero conductivity to a superconductor with unlimited conductivity, and vice versa. This polarized versatility, known as "emergent symmetry," has the potential to offer an ideal On/Off switch in future quantum technology developments.

Lead author Nigel Hussey, Professor of Physics at the University of Bristol, said, "It's a really exciting discovery which could provide a perfect switch for quantum devices of tomorrow.

"The remarkable journey started 13 years ago in my lab when two Ph.D. students, Xiaofeng Xu and Nick Wakeham, measured the magnetoresistancethe change in resistance caused by a magnetic fieldof purple bronze."

In the absence of a magnetic field, the resistance of purple bronze was highly dependent on the direction in which the electrical current was introduced. Its temperature dependence was also rather complicated. Around room temperature, the resistance is metallic, but as the temperature is lowered, this reverses and the material appears to be turning into an insulator. Then, at the lowest temperatures, the resistance plummets again as it transitions into a superconductor.

Despite this complexity, surprisingly, the magnetoresistance was found to be extremely simple. It was essentially the same irrespective of the direction in which the current or field was aligned and followed a perfect linear temperature dependence all the way from room temperature down to the superconducting transition temperature.

"Finding no coherent explanation for this puzzling behavior, the data lay dormant and published unpublished for the next seven years. A hiatus like this is unusual in quantum research, though the reason for it was not a lack of statistics," Prof Hussey explained.

"Such simplicity in the magnetic response invariably belies a complex origin and as it turns out, its possible resolution would only come about through a chance encounter."

In 2017, Prof Hussey was working at Radboud University and saw a seminar advertised by physicist Dr. Piotr Chudzinski on the subject of purple bronze. At the time few researchers were devoting an entire seminar to this little-known material, so his interest was piqued.

Prof Hussey said, "In the seminar Chudzinski proposed that the resistive upturn may be caused by interference between the conduction electrons and elusive, composite particles known as dark excitons. We chatted after the seminar and together proposed an experiment to test his theory. Our subsequent measurements essentially confirmed it."

Buoyed by this success, Prof Hussey resurrected Xu and Wakeham's magnetoresistance data and showed them to Dr. Chudzinski. The two central features of the data (the linearity with temperature and the independence of the orientation of current and field) intrigued Chudzinski, as did the fact that the material itself could exhibit both insulating and superconducting behavior depending on how the material was grown.

Dr. Chudzinski wondered whether rather than transforming completely into an insulator, the interaction between the charge carriers and the excitons he'd introduced earlier could cause the former to gravitate towards the boundary between the insulating and superconducting states as the temperature is lowered. At the boundary itself, the probability of the system being an insulator or a superconductor is essentially the same.

Prof Hussey said, "Such physical symmetry is an unusual state of affairs and to develop such symmetry in a metal as the temperature is lowered, hence the term 'emergent symmetry," would constitute a world-first."

Physicists are well versed in the phenomenon of symmetry breaking: lowering the symmetry of an electron system upon cooling. The complex arrangement of water molecules in an ice crystal is an example of such broken symmetry. But the converse is an extremely rare, if not unique, occurrence. Returning to the water/ice analogy, it is as though upon cooling the ice further, the complexity of the ice crystals 'melts' once again into something as symmetric and smooth as the water droplet.

Dr. Chudzinski, now a research fellow at Queen's University Belfast, said, "Imagine a magic trick where a dull, distorted figure transforms into a beautiful, perfectly symmetric sphere. This is, in a nutshell, the essence of emergent symmetry. The figure in question is our material, purple bronze, while our magician is nature itself."

To further test whether the theory held water, an additional 100 individual crystals, some insulating and others superconducting, were investigated by another Ph.D. student, Maarten Berben, working at Radboud University.

Prof Hussey added, "After Maarten's Herculean effort, the story was complete and the reason why different crystals exhibited such wildly different ground states became apparent. Looking ahead, it might be possible to exploit this 'edginess' to create switches in quantum circuits whereby tiny stimuli induce profound, orders-of-magnitude changes in the switch resistance."

More information: P. Chudzinski et al, Emergent symmetry in a low-dimensional superconductor on the edge of Mottness, Science (2023). DOI: 10.1126/science.abp8948

Journal information: Science

Link:

Research reveals rare metal could offer revolutionary switch for future quantum devices - Phys.org


Researchers use quantum computing to predict gene relationships – Phys.org


In a new multidisciplinary study, researchers at Texas A&M University showed how quantum computing (a new kind of computing that can process additional types of data) can assist with genetic research and used it to discover new links between genes that scientists were previously unable to detect.

Their project used the new computing technology to map gene regulatory networks (GRNs), which provide information about how genes can cause each other to activate or deactivate.

As the team published in npj Quantum Information, quantum computing will help scientists more accurately predict relationships between genes, which could have huge implications for both animal and human medicine.

"The GRN is like a map that tells us how genes affect each other," Cai said. "For example, if one gene switches on or off, then it may change another gene that could change three, or five, or 20 more genes down the line."

"Because our quantum computing GRNs are constructed in ways that allow us to capture more complex relationships between genes than traditional computing, we found some links between genes that people hadn't known about previously," he said. "Some researchers who specialize in the type of cells we studied read our paper and realized that our predictions using quantum computing fit their expectations better than the traditional model."

The ability to know which genes will affect other genes is crucial for scientists looking for ways to stop harmful cellular processes or promote helpful ones.

"If you can predict gene expression through the GRN and understand how those changes translate to the state of the cells, you might be able to control certain outcomes," Cai said. "For example, changing how one gene is expressed could end up inhibiting the growth of cancer cells."

With quantum computing, Cai and his team are overcoming the limitations of older computing technologies used to map GRNs.

"Prior to using quantum computing, the algorithms could only handle comparing two genes at a time," Cai said.

Cai explained that only comparing genes in pairs could result in misleading conclusions, since genes may operate in more complex relationships. For example, if gene A activates and so does gene B, it doesn't always mean that gene A is responsible for gene B's change. In fact, it could be gene C changing both genes.

"With traditional computing, data is processed in bits, which only have two stateson and off, or 1 and 0," Cai said. "But with quantum computing, you can have a state called the superposition that's both on and off simultaneously. That gives us a new kind of bitthe quantum bit, or qubit.

"Because of superposition, I can simulate both the active and inactive states for a gene in the GRN, as well as this single gene's impact on other genes," he said. "You end up with a more complete picture of how genes influence each other."

While Cai and his team have worked hard to show that quantum computing is helpful to the biomedical field, there's still a lot of work to be done.

"It's a very new field," Cai said. "Most people working in quantum computing have a physics background. And people on the biology side don't usually understand how quantum computing works. You really have to be able to understand both sides."

That's why the research team includes both biomedical scientists and engineers like Cai's Ph.D. student Cristhian Roman Vicharra, who is a key member of the research team and spearheaded the study behind the recent publication.

"In the future, we plan to compare the healthy cells to ones with diseases or mutations," Cai said. "We hope to see how a mutation might affect genes' states, expression, frequencies, etc."

For now, it's important to get as clear an understanding as possible of how healthy cells work before comparing them to mutated or diseased cells.

"The first step was to predict this baseline model and see whether the network we mapped made sense," Cai said. "Now, we can keep going from there."

More information: Cristhian Roman-Vicharra et al, Quantum gene regulatory networks, npj Quantum Information (2023). DOI: 10.1038/s41534-023-00740-6

Journal information: npj Quantum Information

Original post:

Researchers use quantum computing to predict gene relationships - Phys.org


Bitcoin Smart Contracts and Apps: Do They Even Exist? – Ledger

Nov 23, 2023 | Updated Nov 23, 2023

Script is a purposefully non-Turing complete language and this makes smart contracts on Bitcoin limited in their functionality.

Layer-2 blockchains and solutions exist for Bitcoin, improving its smart contract offerings while keeping its security intact.

Bitcoin is the world's first blockchain and cryptocurrency. Its primary use case is peer-to-peer payment transfers. Bitcoin is often referred to as "digital gold," as it offers a secure and decentralized way to store and manage value. This core value proposition is what led Bitcoin to become the most popular and widely used crypto network so far. And it's these same core values that many other networks have taken inspiration from while creating new use cases and innovations.

To explain, while Bitcoin was created purely to store and manage value in a decentralized manner, other networks expanded into other territories, creating ways to host decentralized applications. Decentralized apps (DApps) have taken the world by storm, and as such, there has been a push to create these types of apps on the most popular network, Bitcoin. However, this was not the reason Bitcoin was created, thus it's a bit more complex than on many smart-contract-ready networks like Ethereum.

So how do Bitcoin's versions of smart contracts work? And what kinds of functions are available using this method?

Let's explore the concept of Bitcoin smart contracts and dive into the different types of smart contracts on the Bitcoin network.

Yes, the Bitcoin blockchain supports smart contracts. Developers can use Script, a scripting language, to write and deploy smart contracts on Bitcoin.

So, why is Bitcoin never the first thing people think when the topic of smart contracts pops up?

Often, you'll hear people say Bitcoin is not Turing complete. But in fact, that's not quite true. Instead, it's all to do with something called opcodes. Or rather, Bitcoin's lack of them.

Opcodes are essentially small pieces of code that represent functions that can be executed on a specific network. On some blockchains, for example, Ethereum, some of these opcodes have the power to read and write the current state of the blockchain. Then from there, you can combine many of these opcodes to create complex automated tasks triggered by a blockchain transaction. These groups of opcodes are better known as smart contracts and are responsible for countless blockchain apps and platforms.

However, Bitcoin Script doesn't contain these types of opcodes. That also means that Bitcoin doesn't have a recorded current state at any given moment. Instead, it simply records who owns what, and allows people to send and receive coins via UTXOs, with a limited choice of conditions for how that value transfer executes.

Now you know why Bitcoin smart contracts aren't exactly what you might expect, so let's see how they work.

Bitcoin smart contracts, like their counterparts on other networks, are simply pieces of code that automatically execute when some predefined conditions are met. So, you can imagine, they work in a similar way.

Firstly, it's important to know that Bitcoin operates using its own computing language: Script. And Script uses something like a lock-and-key system to execute smart contracts. The sender sets a condition or rule that must be met for the transaction to be processed, acting as the lock. To follow, the recipient must provide a matching key (also a piece of code in Script) that fulfills the condition set by the sender.

Now, what kind of conditions can be set?

Let's find the answer to this by exploring the various types of smart contracts present on Bitcoin.

This, known as Pay-to-Public-Key-Hash (P2PKH), is the most common type of transaction. The sender addresses Bitcoin to the hash of the receiver's public key. To access the funds, the receiver must prove they own the corresponding private key.

This is like locking a box and giving the key to one person.
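As a sketch, here is the standard P2PKH "lock" assembled in Python. The opcode names are Bitcoin Script's real ones; the public key is a placeholder, and actually spending the output would also require a valid signature:

```python
import hashlib

def hash160(data: bytes) -> bytes:
    """SHA-256 then RIPEMD-160, Bitcoin's standard 20-byte hash.
    (ripemd160 availability depends on the local OpenSSL build.)"""
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

# Locking script (scriptPubKey): the condition the sender places on the coins.
def p2pkh_locking_script(pubkey_hash: bytes) -> list:
    return ["OP_DUP", "OP_HASH160", pubkey_hash, "OP_EQUALVERIFY", "OP_CHECKSIG"]

# Unlocking script (scriptSig): the "key" the receiver supplies to spend them.
def p2pkh_unlocking_script(signature: bytes, pubkey: bytes) -> list:
    return [signature, pubkey]

pubkey = b"\x02" + b"\x11" * 32   # placeholder 33-byte compressed public key
print(p2pkh_locking_script(hash160(pubkey)))
```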

MultiSig requires more than one signature for a transaction to be valid. An example is a 2-of-3 MultiSig, which needs at least two signatures from a group of three to execute a transaction. This adds an extra layer of security and is often used by businesses.

This is like a safe that needs two or more keys turned at the same time.
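The locking script for that safe simply lists the keys and the threshold. A minimal sketch of the standard 2-of-3 template, with placeholder keys:

```python
# Bare 2-of-3 multisig template: any two of the three listed public keys
# must sign before the coins can move. Key names are placeholders.
def multisig_locking_script(required: int, pubkeys: list) -> list:
    return ([f"OP_{required}"] + pubkeys +
            [f"OP_{len(pubkeys)}", "OP_CHECKMULTISIG"])

keys = ["<pubkey_alice>", "<pubkey_bob>", "<pubkey_carol>"]
print(multisig_locking_script(2, keys))
# ['OP_2', '<pubkey_alice>', '<pubkey_bob>', '<pubkey_carol>',
#  'OP_3', 'OP_CHECKMULTISIG']
```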

This allows a transaction to be created at any given time, but only be valid at a specified future date or block height. It's a way to postdate a Bitcoin transaction and is useful for various financial agreements. It's a sort of timelock that pertains to transactions.

Time-locked transactions were made possible with BIP-65 and BIP-112's introduction of new opcodes concerning the nLockTime and nSequence fields.

Essentially, these opcodes allow an entry to specify the earliest time it can be added to a block, stopping the transaction from completing until a certain number of blocks, or amount of time, has passed. In short, these time-locked opcodes are important; without them, the entire Lightning Network could not exist!
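A typical BIP-65 lock looks like the template below (a sketch with a placeholder height and key): the output is unspendable until the chain reaches the stated block height, and then only by the matching key holder.

```python
# CLTV (BIP-65) locking script template. Height and key are placeholders.
def cltv_locking_script(lock_height: int, pubkey: str) -> list:
    return [lock_height, "OP_CHECKLOCKTIMEVERIFY", "OP_DROP",
            pubkey, "OP_CHECKSIG"]

print(cltv_locking_script(880_000, "<recipient_pubkey>"))
```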

Instead of locking outputs to a public key, P2SH locks them to a script's hash. The spender needs to provide the script matching the hash and satisfy its conditions. This enables complex scripts without burdening the sender with their details. P2SH came into play with the BIP-16 upgrade.

For instance, you challenge a friend with a puzzle. If they solve it, they get the bitcoins.
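In Script terms, the "puzzle" is a redeem script, and the sender only commits to its 20-byte hash. A sketch of that hashing step (the redeem script bytes are a placeholder):

```python
import hashlib

def hash160(data: bytes) -> bytes:
    """SHA-256 then RIPEMD-160 (ripemd160 support depends on OpenSSL)."""
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

redeem_script = b"<serialized spending conditions>"   # placeholder bytes
locking_script = ["OP_HASH160", hash160(redeem_script), "OP_EQUAL"]
# To spend, the recipient reveals the redeem script itself plus whatever
# data (signatures, puzzle solutions) satisfies the conditions inside it.
print(locking_script)
```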

P2TR is a privacy-preserving complex script that allows multiple parties to create a signature that looks like a single one, enhancing privacy and efficiency.

It is a combination of P2PKH and P2SH with a lot more privacy. This was part of the BIP-341 proposal, popularly known as the Taproot upgrade.

Miniscript was introduced to make the complex spending conditions (that Bitcoin was already capable of) much easier for developers to use and implement. In short, it's a simple coding language that facilitates a number of functionalities. Essentially, it allows your Bitcoin wallet to handle more advanced actions, like requiring multiple keys for an account or having a wallet with some spending conditions that are only active after some amount of time.

While this isn't strictly a smart contract, it is one of the more flexible custody schemes miniscript offers.
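For a flavor of what such a scheme looks like, here is an illustrative policy in the miniscript policy language (key names are placeholders and exact syntax varies by tool); a miniscript compiler would translate it into concrete Bitcoin Script:

```python
# Spend either with the cold recovery key alone, or with the hot key only
# after a relative delay of 4320 blocks (roughly 30 days).
policy = "or(pk(recovery_key),and(pk(hot_key),older(4320)))"
print(policy)
```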

Now that you know about some advanced functions, let's dig into something more tangible.

What about NFTs on Bitcoin: do they even exist in the first place?

The Ordinals protocol launched on Bitcoin in January 2023, enabling Ordinal NFTs by attaching data to individual Satoshis. Each Satoshi gets a unique number through an intuitive ordering system.

Ordinal NFTs reside fully on the Bitcoin blockchain without the need for a separate token. However, that means inscribed Satoshis now compete for block space on the network, resulting in a spike in network fees.

BRC-20, a token standard for Bitcoin Ordinals, uses JSON data to facilitate various token functions. At present, the BRC-20 token standard offers three primary functions: deploying a new token, minting supply, and transferring balances.
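Each operation is just a small JSON payload inscribed onto a satoshi. A sketch of the three standard payloads, using the well-known "ordi" ticker as the example:

```python
import json

# The three BRC-20 operations as JSON payloads.
deploy = {"p": "brc-20", "op": "deploy", "tick": "ordi",
          "max": "21000000", "lim": "1000"}          # create the token
mint = {"p": "brc-20", "op": "mint", "tick": "ordi", "amt": "1000"}
transfer = {"p": "brc-20", "op": "transfer", "tick": "ordi", "amt": "100"}

for payload in (deploy, mint, transfer):
    print(json.dumps(payload))
```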

The BRC-20 token standard is relatively new and young. Hence, the functionalities are limited and not entirely user-friendly.

Bitcoin L2s or layer-2 solutions can help Bitcoin overcome this limitation.

Bitcoin smart contracts have limited functionalities. However, layer-2 blockchains on Bitcoin allow developers to code more complicated smart contracts.

To better understand how they work, let's look at a few examples of Bitcoin layer-2 chains.

The Lightning Network (LN) is a layer 2 scaling solution for Bitcoin, designed to facilitate fast and low-cost transactions by conducting most of the transactions off-chain. It operates using payment channels which are like private off-chain tunnels between users that facilitate payments.

Apart from instant payments and low fees, LN enables the creation of more complex smart contracts like Hashed Time-Locked Contracts (HTLCs) within Lightning Apps or LApps. These contracts are programmable, most often implemented for functions such as micropayments, instant swaps, and streaming payments.
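The core HTLC logic is simple enough to sketch in a few lines of Python (a toy model; real HTLCs enforce these rules in Bitcoin Script, not application code): funds go to whoever reveals the hash preimage before a deadline, and back to the payer afterwards.

```python
import hashlib
import time

class HTLC:
    """Toy hashed time-locked contract: claim with the secret, or expire."""
    def __init__(self, payment_hash: bytes, expiry: float):
        self.payment_hash = payment_hash
        self.expiry = expiry

    def claim(self, preimage: bytes) -> str:
        if time.time() >= self.expiry:
            return "expired: payer can reclaim the funds"
        if hashlib.sha256(preimage).digest() == self.payment_hash:
            return "claimed by payee"
        return "invalid preimage"

secret = b"lightning-secret"
htlc = HTLC(hashlib.sha256(secret).digest(), expiry=time.time() + 3600)
print(htlc.claim(secret))          # claimed by payee
print(htlc.claim(b"wrong-guess"))  # invalid preimage
```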

The Lightning Network is also home to Discreet Log Contracts (DLCs). To clarify, these contracts allow two parties to engage in a bet, using a connection to an oracle to verify real-life events. While the oracle is crucial for the settlement of these bets, it is not directly involved in the transactions that create and settle the contract. These are negotiated directly among the parties making the bet.

Stacks is a chain that works alongside Bitcoin. It enables developers to build smart contracts and decentralized applications (dApps) on top of Bitcoin.

They employ Proof of Transfer (PoX), an approach that allows the Stacks blockchain to process its transactions while leveraging Bitcoin's security.

Smart Contracts on Stacks

Using Stacks, developers can create a wide range of applications, from decentralized finance (DeFi) platforms to non-fungible tokens (NFTs) and decentralized social networks.

It acts as a more diverse and expansive ecosystem while maintaining the robustness and security of Bitcoin as the base layer.

Two applications of Stacks-based smart contracts on Bitcoin are:

Hiro Wallet: It is a non-custodial wallet enabling secure connections and transactions within the Stacks ecosystem. Bitcoin users can participate in Stacking, a reward system that distributes BTC to users for supporting the network and locking STX tokens for a specified period.

Bitcoin Naming Service (BNS): BNS, a decentralized name system for Bitcoin similar to Ethereum's ENS, saw a recent surge in registrations. To own a BTC name, users have to interact with the BNS smart contract on the Stacks chain.

Understanding how Bitcoin smart contracts work is imperative to participating safely in crypto and DeFi. It's even better when a user can understand smart contract code to assess if a smart contract is safe to interact with.

And as with using smart contracts on any blockchain, you should prioritize the security of your private keys. The best way to do that is to use Bitcoin applications through a hardware wallet like Ledger.

Ledger hardware wallets keep private keys offline at all times and also enable clear signing. This helps users clearly read the transaction details before they approve it.

Moreover, Ledger hardware wallets offer support for Hiro Wallet, so users can pair them and start stacking STX, delegating them to trustworthy validators.

Now you know everything about smart contracts on Bitcoin, you're ready to dive in. So get yourself a Ledger, connect to Ledger Live, and start exploring the Bitcoin ecosystem. The time for education is now, and accessing Bitcoin from the Ledger ecosystem is secure and easy to understand.

Continue reading here:

Bitcoin Smart Contracts and Apps: Do They Even Exist? - Ledger


Are Young Sheldon and The Big Bang Theory connected? – Dexerto

Gabriela Silva

Published: 2023-11-23T17:25:57 Updated: 2023-11-23T17:26:08

The CBS sitcom The Big Bang Theory spanned 12 seasons on TV, with fans adoring its cast of comedic characters like Sheldon Cooper (Jim Parsons). With the introduction of Young Sheldon in 2017, is the series connected to The Big Bang Theory?

A group of scientists and friends enthralled audiences for twelve seasons. The Big Bang Theory focused on Mensa-fied best friends and roommates Leonard and Sheldon. As physicists who work at the California Institute of Technology, they know everything about quantum physics. Also the science behind Back to the Future.


But the sitcom also delved into their everyday conundrums and struggles. Like getting girlfriends, understanding social cues, and their dynamics with their other friends and co-workers.


The Big Bang Theory remained well-loved until its end in 2019. In 2017, CBS aired Young Sheldon, starring a character that was oddly familiar. With a genius mind, a young boy living in Texas tries to manage a normal life, while his family tries to understand his vast intellect.

Yes, Young Sheldon is connected to The Big Bang Theory, as it serves as a prequel spinoff focusing on Sheldon Cooper's childhood in Texas with his family.


Out of all the characters in The Big Bang Theory, Sheldon was the most peculiar as he lacked more social cues than his friends. It aroused curiosity about how he was brought up and how his family dealt with his intellectual mind. Fans of the sitcom would remember that Sheldon's twin sister Missy and his brother George do make cameo appearances in a few episodes.



The plot from CBS reads: "For young Sheldon Cooper, it isn't easy growing up in East Texas. Being a once-in-a-generation mind capable of advanced mathematics and science isn't always helpful in a land where church and football are king. And while the vulnerable, gifted, and somewhat naive Sheldon deals with the world, his very normal family must find a way to deal with him. His father, George, is struggling to find his way as a high school football coach and as a father to a boy he doesn't understand. Sheldon's mother, Mary, fiercely protects and nurtures her son."


In Young Sheldon, fans get to see a young Sheldon go through his awkward years. He tries to understand his family's emotions and his relationship with his twin sister and older brother. All the while, fans will see the moments that shaped the character fans met in the original sitcom. But the prequel spinoff also gives some context to details in The Big Bang Theory about Sheldon's family: he was raised by a hard-working father and a religious, devoted mother.


Playing the younger version of Jim Parsons' character is Iain Armitage. Fans may not know the character is based on Parsons' real-life family member. The family patriarch is Lance Barber, and Zoe Perry is the matriarch. In the role of Sheldon's grandmother, Meemaw, is Annie Potts, with Missy played by Raegan Revord and Montana Jordan as George.


Young Sheldon's first five seasons will soon hit Netflix, with plans to develop its final season, ending Sheldon and the Cooper family's journey with Season 7.

Young Sheldon Seasons 1-5 arrive on Netflix US on November 24, and you can check out more of our coverage below:


Read more from the original source:

Are Young Sheldon and The Big Bang Theory connected? - Dexerto


Smart Contracts: Understanding and Mitigating Vulnerabilities | by … – Medium

In the fascinating world of blockchain technology, smart contracts are like the bricks that build a house. They bring automation, transparency, and trust to decentralized applications. But just like a brick house isn't immune to a storm, smart contracts have their own set of vulnerabilities. Let's take a journey into the hidden dangers that threaten these digital agreements and explore how we can make them stronger.

Re-entrancy Attacks: The Unseen Threat

Imagine a function in a contract being called over and over again before it can finish its previous tasks. This is what we call a re-entrancy attack. It's like a silent intruder that can change the contract's state, possibly draining funds or causing unexpected behaviour. Remember the DAO hack in 2016? That was a re-entrancy attack. It taught us a valuable lesson about the importance of handling state changes and external calls carefully.
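The pattern is easiest to see in miniature. This Python toy (deliberately simplified, standing in for the Solidity original) reproduces the DAO-style flaw: the external call happens before the balance is zeroed, so a malicious callback can withdraw repeatedly against the same stale balance.

```python
class VulnerableVault:
    def __init__(self):
        self.balances = {"attacker": 10}
        self.pool = 100

    def withdraw(self, user, receive_callback):
        amount = self.balances[user]
        if amount > 0 and self.pool >= amount:
            self.pool -= amount
            receive_callback()        # external call happens FIRST (the bug)
            self.balances[user] = 0   # state is only updated afterwards

vault = VulnerableVault()
depth = {"n": 0}

def malicious_receive():
    if depth["n"] < 5:                # re-enter while the balance is still stale
        depth["n"] += 1
        vault.withdraw("attacker", malicious_receive)

vault.withdraw("attacker", malicious_receive)
print(vault.pool)  # 40: six withdrawals of 10 instead of one
# Fix: update the balance BEFORE making the external call
# (the checks-effects-interactions pattern), or use a re-entrancy guard.
```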

Unchecked External Calls: The Weak Spot

Smart contracts often need to fetch data from outside sources, like other contracts or oracles. But if we don't check these external calls carefully, we're leaving a door open for vulnerabilities. An unchecked response could mess up the contract's logic, compromise its integrity, and potentially lead to significant financial losses or system malfunctions.

Integer Overflows/Underflows: The Math Gone Wrong

When arithmetic operations in smart contracts aren't handled properly, we can end up with integer overflows or underflows. These can result in unexpected calculations, potentially allowing attackers to manipulate values and disrupt the contract's intended functionality.
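Python integers are arbitrary-precision, so to show the failure mode we emulate the 256-bit modular arithmetic an EVM-style contract uses. (Solidity 0.8+ reverts on overflow by default; this is exactly the check older contracts lacked.)

```python
UINT256_MAX = 2**256 - 1

def evm_add(a: int, b: int) -> int:
    return (a + b) % 2**256           # silently wraps instead of raising

def evm_sub(a: int, b: int) -> int:
    return (a - b) % 2**256           # underflow wraps to a huge number

print(evm_add(UINT256_MAX, 1))        # 0 -- overflow
print(evm_sub(0, 1) == UINT256_MAX)   # True -- a zero balance becomes 2**256 - 1
```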

Access Control Issues: The Unlocked Door

If a smart contract doesn't have proper access controls, it's like leaving your house door unlocked. Unauthorized users might gain the ability to execute critical functions or change the contract's state. This can lead to unauthorized manipulation of data or functionalities, posing severe risks to the entire blockchain system.
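A minimal sketch of the guard, loosely modeled on Solidity's common onlyOwner modifier: every critical function verifies the caller's identity before doing anything.

```python
class GuardedContract:
    def __init__(self, owner: str):
        self.owner = owner
        self.paused = False

    def _only_owner(self, caller: str):
        if caller != self.owner:
            raise PermissionError("caller is not the owner")

    def pause(self, caller: str):
        self._only_owner(caller)  # without this check, anyone could halt the system
        self.paused = True

c = GuardedContract(owner="alice")
c.pause("alice")                  # succeeds
try:
    c.pause("mallory")            # rejected
except PermissionError as err:
    print(err)
```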

Front-Running: The Race to the Front

Front-running is like a sneaky racer who exploits the predictability of transactions by changing their order within a block. Attackers can get ahead by executing transactions before others, potentially gaining an unfair advantage or disrupting the contract's intended execution flow.

Unchecked User Input: The Open Gate

If user inputs aren't checked properly, it's like leaving the gate open for various vulnerabilities. Improper handling of user inputs can lead to denial-of-service attacks, unexpected behaviour, or unauthorized access, compromising the security and stability of the contract.

Mitigating the Risks: Building a Stronger House

Understanding these vulnerabilities is like knowing where the weak spots in our house are. We can then work on strengthening these areas. By adopting secure coding practices, conducting comprehensive audits, and using established frameworks and tools like formal verification, we can significantly reduce these risks. Rigorous testing, continuous monitoring, and fostering a security-first mindset within the development community are key to building a stronger house, or in our case, a more secure smart contract.

More here:

Smart Contracts: Understanding and Mitigating Vulnerabilities | by ... - Medium


Unraveling the Secrets of the Universe’s Most Energetic Cosmic Ray – AZoQuantum

A high-energy particle descends from space to Earth's surface, its origin and nature shrouded in mystery. While it might resemble a scene from science fiction, this scenario is an actual scientific occurrence supported by the investigations led by Associate Professor Toshihiro Fujii at the Graduate School of Science and Nambu Yoichiro Institute of Theoretical and Experimental Physics at Osaka Metropolitan University.

Ultra-high-energy cosmic ray captured by the Telescope Array experiment on May 27, 2021, dubbed Amaterasu. The detected cosmic ray had an estimated energy of 244 EeV, comparable to the most energetic cosmic ray ever observed. Image Credit: Osaka Metropolitan University/L-INSIGHT, Kyoto University/Ryuunosuke Takeshige

Cosmic rays, energetic charged particles stemming from galactic and extragalactic origins, encompass an array of energy levels.

Among these, exceedingly high-energy cosmic rays are exceptionally scarce, surpassing 10¹⁸ electron volts, or one exa-electron volt (EeV). This level of energy stands roughly a million times greater than what even the most potent human-made accelerators have achieved.

Professor Fujii and an international team of scientists have dedicated their efforts to pursuing these space-originating rays through the Telescope Array experiment, which has been ongoing since 2008. This specialized cosmic ray detector comprises 507 scintillator surface stations, collectively spanning a vast detection area of 700 square kilometers in Utah, United States.

A significant breakthrough occurred on May 27, 2021, when the researchers identified a particle boasting an astonishing energy level of 244 exa-electron volts (EeV).
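To put 244 EeV in everyday units, a one-line conversion helps (the baseball comparison is approximate):

```python
EV_TO_JOULE = 1.602176634e-19      # exact, by SI definition

energy_joules = 244e18 * EV_TO_JOULE
print(f"{energy_joules:.1f} J")    # ~39 J carried by a single subatomic particle,
# roughly the kinetic energy of a baseball thrown at ~85 km/h
```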

When I first discovered this ultra-high-energy cosmic ray, I thought there must have been a mistake, as it showed an energy level unprecedented in the last 3 decades.

Toshihiro Fujii, Professor, Graduate School of Science and Nambu Yoichiro Institute of Theoretical and Experimental Physics, Osaka Metropolitan University

The most energetic cosmic ray of 320 EeV observed in 1991 was dubbed the Oh-My-God particle.

Among various potential names for the particle, Professor Fujii and his colleagues reached a consensus on naming it "Amaterasu," drawing from the sun goddess central to Shinto beliefs and credited with playing a pivotal role in Japan's creation mythology.

The Amaterasu particle is as enigmatic as the Japanese goddess herself, raising questions about its origin and domain. The Amaterasu particle might illuminate the origins of cosmic rays.

No promising astronomical object matching the direction from which the cosmic ray arrived has been identified, suggesting possibilities of unknown astronomical phenomena and novel physical origins beyond the Standard Model. In the future, we commit to continue operating the Telescope Array experiment, as we embark, through our ongoing upgraded experiment with fourfold sensitivities, dubbed TAx4, and next-generation observatories, on a more detailed investigation into the source of this extremely energetic particle.

Toshihiro Fujii, Professor, Graduate School of Science and Nambu Yoichiro Institute of Theoretical and Experimental Physics, Osaka Metropolitan University

The research was published in the journal Science on November 24th, 2023.

The Telescope Array experiment is supported by the Japan Society for the Promotion of Science (JSPS) through Grants-in-Aid for Priority Area 431, for Specially Promoted Research JP21000002, for Scientific Research (S) JP19104006, for Specially Promoted Research JP15H05693, for Scientific Research (S) JP15H05741, for Science Research (A) JP18H03705, for Young Scientists (A) JPH26707011, and for Fostering Joint International Research (B) JP19KK0074.

It is also supported by the joint research program of the Institute for Cosmic Ray Research (ICRR), The University of Tokyo; and by the Pioneering Program of RIKEN for the Evolution of Matter in the Universe (r-EMU).

The study was funded by the US National Science Foundation awards PHY-1607727, PHY-1712517, PHY-1806797, PHY-2012934, and PHY-2112904; by the National Research Foundation of Korea (2017K1A4A3015188, 2020R1A2C1008230, 2020R1A2C2102800); by the Ministry of Science and Higher Education of the Russian Federation under the contract 075-15-2020-778, IISN project No. 4.4501.18, Belgian Science Policy under IUAP VII/37 (ULB), and Simons Foundation (00001470, NG).

The Telescope Array project receives partial support from grants within the joint research program of the Institute for Space-Earth Environmental Research at Nagoya University and the Inter-University Research Program of the Institute for Cosmic Ray Research at the University of Tokyo. Additionally, funding stems from the Dr Ezekiel R. and Edna Wattis Dumke, Willard L. Eccles, and George S. and Dolores Dore Eccles foundations.

The State of Utah's backing is facilitated through its Economic Development Board, while the University of Utah contributes through the Office of the Vice President for Research.

Source: https://www.omu.ac.jp/en/

Go here to read the rest:

Unraveling the Secrets of the Universe's Most Energetic Cosmic Ray - AZoQuantum


Microsoft to invest $500 million to expand hyperscale cloud computing and AI in Quebec – MarketWatch

Published: Nov. 22, 2023 at 7:41 a.m. ET


Microsoft Corp. MSFT said Wednesday it's planning to invest $500 million in expanding its hyperscale cloud computing and AI infrastructure in Quebec over the next two years. The software giant said it will increase the size of its local cloud infrastructure footprint across Canada by 750%. The company cited a study by Ernst & Young that found that Microsoft has more than 3,200 partners and cloud infrastructure accounts in Quebec and supports more than 57,000 jobs that contribute more than $6.4 billion annually to the region's GDP. "The investment will accelerate the pace of AI innovation and enable Quebec organizations to further build on the significant capacity already in place across the province including an existing datacentre region, launched in 2016," the company said in a statement. As part of the plan, Microsoft will offer training in AI skills to people in Quebec and will work with KPMG Canada on providing cybersecurity protections for businesses. The stock was up 1% premarket and has gained 56% in the year to date, while the S&P 500 SPX has gained 18%.

Original post:
Microsoft to invest $500 million to expand hyperscale cloud computing and AI in Quebec - MarketWatch


Service Included, FinOps Foundation Counts Cost Of Cloud – Forbes


Cloud has costs. Cloud computing has a cost in terms of staff training and the need to upskill software application development engineers with its new mechanics, syntax, architecture and structure. Cloud computing has a cost in relation to its environmental impact - a factor borne out of its need to run from server racks located in Cloud Service Provider (CSP) datacenters.

Cloud computing has a cost in the shape of cost savings, with IT infrastructure management, maintenance, updates, security provisioning and more all handled by the CSP hyperscaler platform (AWS, Google Cloud Platform, Microsoft Azure and the other vendor-specific clouds that also exist, such as Oracle's) of choice that an enterprise settles on. Cloud also has a cost in terms of migration charges as we move previously on-premises data and applications to the cloud, and of course, cloud has a cost because cloud computing isn't free.

With all those costs to consider, it is perhaps no surprise that we've seen the FinOps Foundation established in recent times. A part of The Linux Foundation since 2020 (prior to which it was a standalone non-profit trade association), this non-profit technology consortium is focused on advancing the people and practice of FinOps to analyze and manage the cost of cloud.

In search of a formal definition of FinOps (which we have also discussed here), the foundation's Technical Advisory Council states that "FinOps is an evolving cloud financial management discipline and cultural practice that enables organizations to get maximum business value by helping engineering, finance, technology and business teams to collaborate on data-driven spending decisions."

With cloud spending being the central focus of FinOps (although the other above-mentioned costs also form an integral part of the mix), the FinOps Foundation has now launched the FinOps Open Cost & Usage Specification (FOCUS) version 1.0-preview. In terms of collaborators and supporters, AWS, Microsoft, Google Cloud and Oracle Cloud join large cloud spenders like CapitalOne, Walmart, Workday, Goldman Sachs and others as formal contributors to this specification, i.e. all these firms have aligned to this universal cloud billing specification.

This foundational project for the FinOps discipline is hoped to create an open specification for the presentation of cost and usage data. In other words, the team (the committee, the council, the board, the aligned organizations, everyone involved basically) has said that it is concerned with demystifying cloud spend and making it easier to show and report value. The goal of the FOCUS project is to normalize schema (the format and structure that information is presented in, such as that presented by a database) and terminology for cost and usage data, helping organizations to achieve consistency and simplicity of understanding cost and usage across cloud and data sets.
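A hedged sketch of what that normalization amounts to in practice: mapping each provider's billing-export columns onto one shared schema so multi-cloud spend can be queried uniformly. The target names below (BilledCost, ServiceName) follow FOCUS terminology, while the vendor columns and the mapping itself are simplified for illustration:

```python
# Illustrative vendor-to-FOCUS column mapping (simplified, not exhaustive).
VENDOR_TO_FOCUS = {
    "aws":   {"lineItem/UnblendedCost": "BilledCost",
              "product/ProductName":    "ServiceName"},
    "azure": {"costInBillingCurrency":  "BilledCost",
              "meterCategory":          "ServiceName"},
}

def normalize(vendor: str, row: dict) -> dict:
    """Rename a raw billing row's columns into the shared schema."""
    mapping = VENDOR_TO_FOCUS[vendor]
    return {mapping.get(key, key): value for key, value in row.items()}

aws_row = {"lineItem/UnblendedCost": 12.34, "product/ProductName": "Amazon EC2"}
print(normalize("aws", aws_row))
# {'BilledCost': 12.34, 'ServiceName': 'Amazon EC2'}
```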

"We are establishing FOCUS as the cornerstone lexicon of FinOps by providing an open source, vendor-agnostic specification featuring a unified schema and language," said Mike Fuller, CTO at the FinOps Foundation. "With this release, we are paving the way for FOCUS to foster collaboration among major cloud providers, FinOps vendors, leading SaaS providers and forward-thinking FinOps enterprises to establish a unified, serviceable framework for cloud billing data, increasing trust in the data and making it easier to understand the value of cloud spend."

Senior director of engineering at Walmart Tim O'Brien has said that he is glad to see the momentum gathering here. He thinks that having a vendor-neutral view of all cloud resources will enable teams to engage more proficiently with cloud partners and, in turn, better serve their users. Forrester analyst Lee Sustar predicts that the FOCUS standard will take off to normalize cloud billing in 2024. He is of the mind that this vendor-neutral multi-cloud view of resources will enable cloud engineers, but also non-IT stakeholders in finance and vendor management, to engage more competently with cloud operations teams.

"The basics of good decision-making and execution requires clarity of information. FOCUS goes a long way to making that possible and we look forward to even greater innovation in the years ahead," said Anne Johnston, VP of Engineering Cloud Costs at Capital One. Corporate VP at Microsoft Fred Delombaerde echoes Johnston's positivity and says that his department is committed to working with the FinOps Foundation to bring together the knowledge and involvement of all major players in cloud billing to benefit all.

After the release of the initial FOCUS 0.5 at FinOps X (a FinOps conference, no relation to Twitter) in the summer of 2023, the formal release of FOCUS 1.0 includes many foundational advances, such as extending the specification for the presentation of both cloud and SaaS data sets and the inclusion of open source data converter implementations and data validators to accelerate adoption. To enable broader FinOps capabilities in the cloud, FOCUS 1.0 enables use cases for discount analysis, unit pricing allowing rate transparency, and detailed usage analysis, while overlaying business context information via support for tags and resource metadata.

The FOCUS 1.0 release will also include a real-world practitioner use case library, with more than forty commonly performed FinOps use case examples tied to FOCUS data outputs, curated by experienced FinOps practitioners from massive cloud spenders, including Johnston of Capital One. These use cases offer a standardized approach to address common FinOps requirements, leveraging FOCUS data as the foundation. "The entire library, along with comprehensive specifications and detailed instructions and SQL queries, is accessible online, simplifying navigation and ensuring users can easily find the resources they need," notes the foundation in a technical statement.

Like cloud security, which many technology commentators will argue was somewhat overlooked at the outset of the cloud revolution, cost was never discussed as a major element of cloud, over and above cloud providers claiming that they would save money on Capital Expenditure (CapEx) outlay. As we now count the pennies more closely and examine ALL the costs tabled at the start of this discussion, we can perhaps really use FinOps to make cloud pay.

It's time to make cloud pay, and service is included, literally.

I am a technology journalist with three decades of press experience. Primarily I work as a news analysis writer dedicated to a software application development beat. I have spent much of the last twenty years focusing on open source, analytics & data science, cloud computing, mobile devices & data management. I have an extensive background in communications starting in print media, newspapers and also television. If anything, this gives me enough hours of cynical world-weary experience to separate the spin from the substance, even when the products are shiny and new.

See the original post:
Service Included, FinOps Foundation Counts Cost Of Cloud - Forbes
