Bitcoin maintains price above $30k as more investors hold their assets – CoinJournal

Key takeaways

Bitcoin has maintained its value above $30k as more investors hold their cryptocurrency assets.

AltSignals' stage-2 presale has now surpassed 50%, with more than $1.1 million raised so far.

The cryptocurrency market has been stagnant for the last two weeks, with the prices of most coins and tokens trading within specific ranges.

Bitcoin, the world's leading cryptocurrency by market cap, has maintained its value around the $30k region over the past few weeks.

Rising inflation figures in the United States and other parts of the world have led to the Federal Reserve hiking interest rates in recent months.

However, the Fed didn't raise interest rates last month, and investors are now optimistic that the US central bank might cut rates later this month.

The US Consumer Price Index rose 0.2% in June, lower than market analysts had expected. With inflation declining in the US, the Fed might reduce interest rates in the near term.

If that happens, the cryptocurrency market could see a positive performance. At press time, the price of Bitcoin stands at $30,566, up by less than 1% in the last 24 hours.

Data obtained from Glassnode revealed that the number of wallets holding at least one BTC had reached an all-time high of 1,008,737.

As more people are holding Bitcoin, investment in other cryptocurrencies could increase in the near term. Investors could be looking to enter new projects and make excellent returns.

One of the projects investors could consider is AltSignals. The project is still in its presale stage and has already raised more than $1 million in its second presale round.

Before investing in AltSignals, it is advisable to know what the project is and the problem the developers intend to solve in the market.

AltSignals is a project that primarily targets traders. It is a trading platform that provides trading signals for stocks, forex, indices, cryptocurrencies, and CFDs.

Although AltSignals is still in its presale stage, the team has raised more than $1 million and will use the funds to develop its platform.

The stage-2 presale has raised more than $1 million so far, with the team targeting around $2 million in this round.

ASI, the native token of the AltSignals ecosystem, is currently sold for 0.01875 USDT. The token price could increase in the near term once the project officially launches and gets listed on cryptocurrency exchanges.

In their whitepaper, the AltSignals team revealed that they would use the funds to develop ActualizeAI, a solution that could make it easier for more people to enter the cryptocurrency trading scene.

ActualizeAI will be AltSignals' fully automated solution, designed to make it easier for people to trade cryptocurrencies.

Visit the official AltSignals website to learn more about their presale.

Bitcoin and the broader cryptocurrency market have performed well since the start of the year. Year-to-date, Bitcoin is up by nearly 50%, outperforming the other major financial markets.

Market analysts predict that Bitcoin's price could surge higher in the medium to long term. If that happens, other cryptocurrencies could record massive gains too.

One of the projects to look out for is AltSignals. Although it is still in its presale stage, AltSignals could be a project that changes the way traders approach the market.

If the development team delivers on its promise of a platform dedicated to traders, AltSignals could see an influx of users, which could send ASI's price higher over the coming months and years.

The launch of ActualizeAI could be a huge boost for AltSignals as the project seeks to attract more traders to the cryptocurrency ecosystem.

Read the rest here:
Bitcoin maintains price above $30k as more investors hold their assets - CoinJournal

Read More..

Audi – Edge Cloud 4 Production: IT-based factory automation enters … – Automotive World

Audi has been testing the local server solution Edge Cloud 4 Production (EC4P), a new method of IT-based factory automation, at Böllinger Höfe since July 2022. Starting in July 2023, this paradigm shift in Audi's shop floor IT will be used for the first time in series production. At Böllinger Höfe, a local server cluster will control the worker support systems for two production cycles of the Audi e-tron GT quattro, RS e-tron GT, and Audi R8 models. In the future, the software-controlled, flexible, and scalable server solution will replace the decentralized control system that relies on high-maintenance industrial PCs. EC4P allows Audi to redeploy the computing power the production line requires to local data processing centers. In addition to this first application in series production, Audi is simultaneously adapting EC4P for other use cases in the Audi Production Lab (P-Lab).

EC4P uses local servers that act as data processing centers. They can process extensive production-related data with low latency and distribute it to the worker support systems, which indicate to employees which vehicle part to install. This approach eliminates the need for expensive, high-maintenance industrial PCs.

"Our motto is software, not hardware," said Sven Müller and Philip Saalmann, Head and Co-head of the 20-member EC4P project team. "EC4P enables the quick integration of software and new tools, whether for worker support, bolt control, vehicle diagnostics, predictive maintenance, or energy savings," explained Müller. Moreover, by eliminating industrial PCs on the line, EC4P mitigates the risk of malware attacks. Jörg Spindler, Head of Production Planning and Production Technology at Audi, emphasized the opportunities of EC4P: "We want to bring local cloud solutions to production at our plants to take advantage of advances in digital control systems."

The server solution makes it possible to level out spikes in demand across all virtualized clients, speeding application deployment and ensuring more efficient use of resources. Production will be economized, particularly where software rollouts, operating system changes, and IT-related expenses are concerned. The flexible cloud technology also scales to adapt to future tasks. "What we're doing here is a revolution," announced Gerd Walker, Member of the Board of Management of AUDI AG Production and Logistics, at the launch of the first test phase. "This first application in series production at Böllinger Höfe is a crucial step toward IT-based production."

In July 2023, Audi will integrate EC4P into series production following a test run in operation and preliminary testing. "The small-scale series produced at Böllinger Höfe is ideal for testing EC4P's capacity as a control system and its use in large-scale production," said Saalmann. Audi is the first car manufacturer in cycle-dependent production to use a centralized server solution that redeploys computing power. Production cycles 18 and 19 at Böllinger Höfe, during which interior panels are installed and work is done on the underbody, use thin clients capable of power-over-Ethernet. These terminal devices get electrical power via network cables and obtain data through local servers.

By the end of the year, Audi will switch the worker support systems for all 36 cycles to the server-based solution. The architecture of the server clusters is designed to enable rapid scaling of EC4P in large-scale production. "With EC4P, we are merging the fields of automation technology and IT to advance our practical use of the Internet of Things," said project manager Müller. This development will also create new employee roles at the interface of production and IT. For example, employees will use new applications to control automation technology. "To this end, we are setting up a control team with overarching expertise to supervise and monitor the EC4P system around the clock. The team will work closely with the line employees."

Audi is studying how digital innovations affect the working environment as part of its Automotive Initiative 2025 (AI25) in collaboration with partners, including the Fraunhofer Institute for Industrial Engineering. The AI25 takes a holistic approach, giving equal consideration to technology, people, and Audi's mission of advancing the digitalization of its production activities.

"We work as a team to free up resources for new areas like battery and module production," said Spindler. "New technologies and collaboration models will require our teams to acquire new skills. For that reason, our employees' qualifications play an important role. With its longer cycle times, we view the Böllinger Höfe plant as a learning environment to roll out IT-based factory automation at larger sites such as Ingolstadt and Neckarsulm later."

One of the first use cases is controlling electrical commissioning activities at Audi's German locations. After EC4P is proven in assembly, a further concrete step will be for the server solution to take over and monitor the programmable logic controller (PLC), which was previously hardware-based, in the automation cells in body construction. The project team is developing and testing the software alongside three manufacturers at the EC4P project house in Ingolstadt.

SOURCE: Audi

See the original post:
Audi - Edge Cloud 4 Production: IT-based factory automation enters ... - Automotive World

Read More..

Virtual Private Server: What It Is and How to Use It with AAAFx – Finance Magnates

A lot of ink has been spilled on VPS hosting and how to harness its potential. In trading, where every millisecond counts, efficiency and automation are critical to being successful. But how do traders incorporate efficiency and automation into their activity? It's simple: with the right technology suite. When it comes to technology and VPS hosting, AAAFx ticks all the boxes.

The award-winning brokerage offers exposure to 70 Forex pairs and hundreds of CFDs on stocks, indices, commodities and cryptocurrencies, as well as a powerful technology arsenal to capture all the important price swings across different markets and timeframes. VPS hosting is a core element of this arsenal, ensuring maximum platform uptime. Here we shed light on the importance of VPS hosting and how to use it.

In simple terms, a VPS - short for Virtual Private Server - is a remotely hosted virtual machine that maintains a permanent link between an individual trading terminal and the broader trading network.

For example, when using MT5 on their home computers, traders are connected to the standard trading network where their orders are executed. Having an active VPS ensures smooth connectivity. This is possible thanks to the latest-generation Cloud hosting capabilities that brokerage firms like AAAFx offer.

Implementing cutting-edge VPS technology, AAAFx offers the best trading experience to traders around the globe by improving trade execution speed, boosting traders' local network capacity, and enhancing connection stability, allowing them to execute trades quicker.

As an exclusive service, the broker offers VPS services completely free of charge to all its EU and global clients for a deposit of more than $5,000 or equivalent in another major currency. Traders depositing less than $5,000 or equivalent can also access the VPS for a modest monthly fee of $25, which will be automatically deducted from the account balance.

Designed to host a full operating system, a VPS can be controlled remotely from virtually any device. It works in much the same way as a web hosting server, except that it can host a complete desktop environment while functioning as if it were running on its own dedicated machine. As such, Virtual Private Servers are practically SaaS solutions, each with a specific allocation of CPU power and storage that ensures users enjoy the speed and connectivity they need.

The advantages of using a VPS when trading with AAAFx are multiple, including:

Confident use of trading robots, EAs and trading signals

The ability to trade from anywhere around the world, regardless of the local internet speed

The privilege of anonymity, privacy and enhanced security

To make the most of a trading VPS with AAAFx, traders must first make sure the VPS plan meets the following specifications:

Intel processor to ensure full compatibility

1300 MB RAM

25 GB disk space

2TB bandwidth

Using a VPS with AAAFx is extremely simple. To start reaping the benefits of seamless trading, all you have to do is:

Connect to the AAAFx VPS: If you're using a Windows VPS, the easiest and most direct way is to connect using the Remote Desktop Protocol (RDP). RDP is a Microsoft system that gives you access to a built-in client which can communicate with your VPS. To launch it, open the Start menu, type in "remote desktop" and open it

Enter your IP and login credentials in the designated space and click Connect

Install MT4 or MT5 on your Windows VPS

That's it, you're all set! Are you ready to give the AAAFx VPS a test drive? Register now.
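For traders who prefer to script that connection step, the same RDP session can be launched from code. Below is a minimal Python sketch that opens the built-in Windows Remote Desktop client (mstsc) against a VPS address; the IP shown is a documentation placeholder, and the real address comes with the credentials AAAFx provides.

```python
import subprocess

# Hypothetical VPS address -- substitute the IP from your AAAFx
# VPS welcome credentials. 203.0.113.x is a documentation-only range.
VPS_IP = "203.0.113.10"

# mstsc is the built-in Windows Remote Desktop client; /v: sets the target host.
subprocess.run(["mstsc", f"/v:{VPS_IP}"], check=True)
```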

Read more:
Virtual Private Server: What It Is and How to Use It with AAAFx - Finance Magnates

Read More..

Supercloud comes to the supermarket: How growing interest in … – SiliconANGLE News

The growth and success of computing has been a story about creating ways to make cross-platform connections.

The rise of hypertext markup language, or HTML, as a common language for web page creation in the 1990s allowed a broad audience to fuel internet growth. Bluetooth emerged in the early 2000s as a short-range wireless standard that enabled cross-device communication.

This same story is now being written in the cloud world, specifically the supercloud, a hybrid and multicloud abstraction layer that sits above and across hyperscale infrastructure. As theCUBE, SiliconANGLE Media's livestreaming studio, prepares to hold its Supercloud 3 event on July 18, the landscape is continuing to evolve as a growing number of major companies are taking steps to build cross-cloud services. This is being driven by customers who prefer having multiple clouds yet are frustrated by the challenge of having to manage them.

"It's too complicated to really take advantage of multicloud to the degree they'd like without engaging outside talent," said Dave Vellante, industry analyst for theCUBE, SiliconANGLE Media's livestreaming studio, in a recent Breaking Analysis post. "That's an opportunity, that's supercloud: a mesh of multiple clouds that are interconnected and managed as a single entity. Cross-cloud simplification is needed and will deliver business benefits in terms of getting more done with less, faster and more securely."

The technology driving supercloud has advanced since theCUBE held its inaugural Supercloud event last August. Soon after the event, VMware Inc. unveiled new solutions for its Cross-Cloud Services portfolio, including Cloud Foundation+ for managing and operating full stack hyperconverged infrastructure in data centers and two projects designed to provide multicloud networking and advanced cross-platform security controls.

Cloudflare Inc. has built the equivalent of a distributed supercomputer that connects multiple clouds and can allocate resources at a large scale. In June, the company announced an agreement with Databricks Inc. to more simply share and collaborate on live data across clouds.

"The technology that will make up a supercloud is getting better constantly: AIOps, cross-cloud security, cross-cloud data management, etc.," said David Linthicum, chief cloud strategy officer of Deloitte Consulting, in an interview for this story. "Suitable investments have been made in that tech over the past year."

Where does this leave the major cloud providers such as Amazon Web Services Inc., Microsoft Corp. and Google LLC? The hyperscalers have provided the foundation on which superclouds were built, such as in the case of AWS serving as the host platform for Snowflake Inc.

There have been signs of interest among hyperscalers in providing cross-cloud services, such as Microsoft's continued partnership with Oracle Corp. to deliver streamlined access to Oracle databases for Azure customers via Oracle Cloud Infrastructure. In late May, Google announced Cross-Cloud Interconnect services through Google Cloud.

Yet there is a growing belief among some analysts that the hyperscalers will ultimately have to do more.

"The hyperscalers should be more focused on this space, in my opinion," Linthicum said. "I think it is an opportunity for them, as well as a moment to invest in existing supercloud players, which are smaller companies."

Some of those smaller companies are working on solutions for architecting at the edge. This emerging field of edge architecture for supercloud represents a growth opportunity for dealing with the volume and scale of connected devices and decisions around where data will be processed.

"It's no longer a uniform set of compute and storage resources that are available at your disposal," said Priya Rajagopal, director of product management at the database company Couchbase Inc., in an interview with theCUBE. "You've got a variety of IoT devices. You've got mobile devices, different processing capabilities, different storage capabilities. When it comes to edge data centers, it's not uniform in terms of what services are available."

Couchbase's NoSQL database technology powers complex business applications, and its Capella offering provides a cloud-hosted database as a service that is available on AWS, Microsoft Azure and Google Cloud. Couchbase's technology offers an intriguing view of how supercloud can help solve connectivity issues at the edge. Data sync at the edge is a challenge when low network connectivity can be common in remote locations.

Couchbase solves this problem through the use of global load balancers that can redirect traffic in a connectivity failure. Applications continue to run and then are automatically synced to backend servers when connectivity resumes.
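The pattern is easy to picture in miniature. The sketch below is plain Python, not the actual Couchbase API; the class and callback names are invented for illustration. Writes always succeed locally, and a sync pass drains them to the backend only once a send is confirmed, which is the essence of sync-on-reconnect.

```python
from collections import deque

class OfflineFirstBuffer:
    """Illustrative offline-first buffer: writes land locally first,
    and sync() drains them to the backend once connectivity returns."""

    def __init__(self, backend_send):
        self.pending = deque()            # a real system would persist this locally
        self.backend_send = backend_send  # callable; raises ConnectionError when offline

    def write(self, doc):
        self.pending.append(doc)          # the local write always succeeds

    def sync(self):
        while self.pending:
            try:
                self.backend_send(self.pending[0])  # push the oldest change first
            except ConnectionError:
                return False                        # still offline; retry on the next pass
            self.pending.popleft()                  # dequeue only after a confirmed send
        return True
```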

"I think once you start going from the public cloud, the clouds there scale," said Muddu Sudhakar, co-founder and CEO of Aisera Inc., in an interview on theCUBE. "The lack of compute function will kick in. I think everything should become asynchronous. I think as long as algorithms can take that into the edge, I think that superclouds can really bridge between the public cloud to the edge."

While supercloud offers promise for solving the nettlesome problem of processing data at the edge, its potential for the delivery of key cross-cloud services, such as AI and security, remains a central focus for the enterprise.

Vellante has termed the evolving enterprise model an "AI-powered hybrid-multi-supercloud," and the role of artificial intelligence cannot be overstated. The rise of ChatGPT, an OpenAI LP tool for generating articulate, human-like responses to a wide range of queries, has opened the gates for AI adoption on a massive scale.

Where the supercloud converges with ChatGPT and other AI tools will be in the delivery of services in an abstraction layer that deconstructs complexity, according to Howie Xu, former vice president of machine learning and artificial intelligence at Zscaler Inc. and now with Palo Alto Networks Inc. "In order for me as a developer to create applications, I have so many things to worry about, and that's complexity," said Xu, in an interview on theCUBE. "But with ChatGPT, with the AI, I don't have to worry about it. Those kinds of details will be taken care of by the underlying layer."

As businesses increasingly put data foundations in place to take full advantage of hybrid multicloud environments, management of this infrastructure and the massive amount of information required by AI tools will create further demand for cross-cloud connectivity.

"The growth of AI will drive more use cases that span clouds," Linthicum said. "These AI systems need vast amounts of data to work well, and that data will be in any number of cloud platforms and needs to be accessed. Since the relocation of data is typically not cost viable, the data will be leveraged where it exists, and thus the need to manage data integration at the supercloud level. This will drive tremendous growth."

Perhaps one of the most difficult services to deliver in a supercloud model is security. This is because the cloud is complex and there is a lack of visibility for workloads operating in software containers that drive modern applications.

In theCUBE's Supercloud 2 event in January, Piyush Sharma, founder of Accurics (acquired by Tenable Inc.), called for common standards that would help implement consistent security practices across cloud models.

"I think we need a consortium, we need a framework that defines that if you really want to operate in supercloud," Sharma said. "Otherwise, security is going everywhere. [SecOps] will have to fix everything, find everything ... it's not going to be possible."

Not long after Sharma appeared on theCUBE, there was movement toward implementing the common framework he described. AWS, Splunk Inc. and over a dozen other firms announced the launch of the Open Cybersecurity Schema Framework.

The goal of OCSF is to streamline the processing of data after cyberattacks and reduce the amount of manual work required to share information between security tools. In November, Cribl Inc. announced its support for OCSF and the use of Cribl Stream by AWS customers for converting data from any source into the OCSF format.
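To make the normalization idea concrete, here is a rough Python sketch mapping a raw login record into an OCSF-style Authentication event. The numeric identifiers follow OCSF's class_uid/activity_id convention, but treat the exact values and field choices as illustrative rather than a reference implementation of any particular tool.

```python
import time

def to_ocsf_auth_event(raw: dict) -> dict:
    # Normalize a raw login record into an OCSF-style Authentication event.
    # Field names and identifiers below are illustrative of the schema's shape.
    return {
        "class_uid": 3002,                    # Authentication event class
        "activity_id": 1,                     # 1 = Logon
        "time": int(time.time() * 1000),      # epoch milliseconds
        "user": {"name": raw.get("username")},
        "src_endpoint": {"ip": raw.get("source_ip")},
        "status": "Success" if raw.get("ok") else "Failure",
    }

print(to_ocsf_auth_event({"username": "alice", "source_ip": "198.51.100.7", "ok": True}))
```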

"Security is challenging because of the complexity of the security models that run within each native cloud provider," Linthicum said. "You somehow need to figure out how to abstract and automate those services so they can function with some cross-cloud commonality. This needs to be a focus for the supercloud tech providers, even the hyperscalers."

Over the past year, supercloud has moved beyond being merely a platform that others built successfully on top of the hyperscalers into a new phase in which enterprises are clamoring for unified cross-cloud services and companies are lining up to provide them. With a nod to the author Norman Mailer, supercloud has come to the supermarket.

Originally posted here:
Supercloud comes to the supermarket: How growing interest in ... - SiliconANGLE News

Read More..

Airlines at Munich Airport streamline IT with Amadeus cloud … – Airline Routes & Ground Services

Munich Terminal 1 Airline Club, a group of carriers operating from terminal 1 in Munich, Germany's second-largest airport, has adopted Amadeus cloud technology to simplify and improve the passenger service operations of its members.

The club is a group of airlines responsible for selecting and managing shared technology at key service points like check-in and boarding.

Providing IT infrastructure that multiple airlines share at the airport can be complex. Previously, multiple costly network links needed to be maintained between each airline and Munich airport, with computing happening on hard-to-maintain servers located at the terminal.

Passenger service agents connected to these local servers with energy-intensive traditional PCs in order to access multiple airline systems.

With the objective of improving the efficiency of this shared infrastructure, Munich T1 Airline Club has now migrated 330 workstations at check-in counters, boarding gates and lost & found desks to the Amadeus Airport Cloud Use Service (ACUS).

The move eliminates the need for local servers and costly legacy networks. Instead, agents can now access any Departure Control System they need using energy-efficient thin client machines, which connect to the cloud using a single cost-effective internet link.

Patrik Toepfner, Chairman of the Munich T1 Airline Club, Munich Airport, said: "We've selected Amadeus as our partner for shared infrastructure because its cloud model greatly simplifies the technology we use at the airport.

"We are confident this choice will streamline our operations and improve the overall travel experience for passengers at Munich terminal 1."

Yannick Beunardeau, SVP Airport & Airline Operations EMEA at Amadeus, added: "A growing number of airports and airlines are recognizing the simplicity of accessing passenger service technology from the cloud.

"With this modern approach, agents can focus on delivering the best possible service to passengers using any airline system they need through an internet browser.

"We're seeing specialist software at airports become more like the simple consumer applications we use in our personal lives, and that can only be a good thing."

See the original post:
Airlines at Munich Airport streamline IT with Amadeus cloud ... - Airline Routes & Ground Services

Read More..

Seven Things to Love About Arc-enabled SQL Managed Instances – StorageReview.com

As a follow-up to our recent Azure Arc-enabled Managed Services article, we continued exploring the power of Azure Arc and Azure Stack HCI with Microsoft and Intel partner DataON. We quickly realized what they deliver and one use case stood out: Azure Arc-enabled SQL Managed Instance. Arc-enabled SQL Managed Instance is a Platform-as-a-Service (PaaS) that uses the latest SQL Server (Enterprise Edition) database that is patched, updated, and backed up automatically. And for business-critical applications, Arc-enabled SQL Managed Instance has built-in high availability.

As we explored Azure Arc-enabled SQL Managed Instance, we discovered several unique, interesting, or powerful features. Those elements are expanded upon below.

Early on, companies discovered the power of the Azure public cloud and the services it could provide. However, certain workloads must remain on-premises for compliance reasons. Azure Stack HCI addresses these regulatory requirements by bringing the power and services of Azure (including Arc-enabled SQL Managed Instance) to the company's own hardware in a location of its choosing.

DataON, one of the companies we partner with, was an early adopter of these technologies and has helped us better understand them.

With Azure Arc, customers can view and manage their applications and databases consistently with a familiar toolset and interface, regardless of where these services runfrom on-premises to multi-cloud to edge.

Now every Azure Stack HCI cluster node is Arc-enabled when registering a cluster with Azure. This means that all these powerful Azure management capabilities are available for your Azure Stack HCI nodes.

Arc allows customers to deploy, manage, and maintain an Arc-enabled SQL Managed Instance.

At the start of its development, Microsoft prioritized security when creating Azure Stack HCI, Arc, and Arc-enabled SQL Managed Instance. Microsoft and Intel have collaborated to provide a comprehensive security solution with Azure Stack HCI, covering the entire IT infrastructure. They've also incorporated Azure Arc to extend Azure-based security to hybrid and multi-cloud environments. Intel's built-in security and extensions further reinforce this solution, ensuring complete protection from silicon to the cloud.

Intel's security measures ensure devices and data are trustworthy, while also providing workload and encryption acceleration. This allows for secure hardware-isolated data protection and software reliability in order to safeguard against cyber threats.

Azure's platform has integrated security tools and controls that are readily accessible and user-friendly. DevOps and Security Center's native controls can be customized to safeguard and supervise all cloud resources and architecture tiers. Microsoft has developed Azure using industry-standard zero-trust principles, which involve explicit verification and the assumption that a breach has occurred.

Security begins at the hardware level. The use of a Secured-core Server and a dashboard, available through Azure Stack HCI, enables hardware verification and auditing to ensure that the server meets the requirements for Secured-core.

Engaging with DataON (an Intel Platinum Partner) ensures the hardware base for an on-premises deployment of Azure Stack HCI uses the latest Intel-based servers to meet Secured-core server requirements. TPM 2.0, Secure Boot, Virtualization-Based Security (VBS), Hypervisor-Protected Code Integrity, pre-boot DMA protection, and DRTM protection are some of the security features provided by Intel-based servers and verified by Azure Stack HCI.

Arc-enabled SQL Managed Instance leverages Kubernetes (K8s) to host the SQL instance and provide additional management capabilities for those SQL instances. K8s is a proven technology (it has been around for about a decade) in the data center, and by utilizing it, Microsoft capitalizes on its features and functions and its powerful and rich ecosystem.

Arc-enabled SQL Managed Instance hides the complexity of running containers through dashboards and wizards while allowing others to work directly with K8s.

The licensing costs for your Arc-enabled SQL Managed Instance are calculated and displayed as the instance is configured, revealing how much the database will cost before deployment. This also allows customers to perform what-if calculations and weigh the trade-offs when deciding what to deploy. For example, you can determine if you want one, two, or three replicas for high availability or any other attributes that Arc-enabled SQL Managed Instance can provide. Having these cost insights prevents any surprises at the end of the month and allows lines of business to configure their instances to accommodate their budgets.
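As a back-of-the-envelope illustration of those what-if calculations, the sketch below compares monthly estimates across replica counts. The per-vCore rate is a made-up placeholder, not Microsoft's actual Arc pricing; the point is simply how replica count multiplies the compute bill.

```python
# All prices are invented placeholders, not Microsoft's actual Arc pricing.
VCORE_RATE_PER_HOUR = 0.134   # assumed USD per vCore-hour
HOURS_PER_MONTH = 730

def monthly_estimate(vcores: int, replicas: int) -> float:
    # Each high-availability replica runs its own copy of the instance's compute.
    return vcores * replicas * VCORE_RATE_PER_HOUR * HOURS_PER_MONTH

for replicas in (1, 2, 3):
    print(f"{replicas} replica(s): ${monthly_estimate(8, replicas):,.2f}/month")
```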

As a bonus, if you already have a SQL Server license, you can use the Azure Hybrid benefit to save on licensing costs.

As Azure Arc is policy-driven, an administrator or even the end user of a database can create a new SQL managed instance using the Azure web interface. Azure Stack HCI aggregates all the compute and storage of the servers under its control. So creating a new database entails selecting what attributes are needed, but not having to decide which individual, discrete components are used for hosting.

Within just a few minutes of deployment, a highly available Arc-enabled SQL Managed Instance with built-in capabilities such as automated backups, monitoring, high availability, and disaster recovery will be ready for use.

To consume the database, Arc-enabled SQL Managed Instance provides a list of connection strings for common programming languages. This is a small change, but it can save a lot of frustration for programmers looking to connect to it.
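For example, a Python client built from one of those connection strings might look like the following. The server address, port, database name, and credentials here are placeholders; the real values come from the connection-strings list the instance exposes.

```python
import pyodbc

# Placeholder endpoint and credentials -- substitute the values from the
# connection-strings list that the Arc-enabled SQL Managed Instance provides.
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sqlmi-1.contoso.local,31433;"
    "DATABASE=appdb;UID=sqluser;PWD=<password>;"
    "Encrypt=yes;TrustServerCertificate=yes;"
)

conn = pyodbc.connect(conn_str)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
conn.close()
```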

Using Microsoft's fully automated Azure Data Migration Service, moving a database to Azure Stack HCI as an Arc-enabled SQL Managed Instance is a snap. Even for skilled, experienced professionals, migrating a database can be an anxiety-ridden prospect. Microsoft created a wizard to guide users through the process, removing the stress of doing it yourself or the expense of contracting it out.

More often than not, monitoring a database is an afterthought, an additional cost, or neglected due to its complexity or availability. Microsoft made a bold move by including an open-source monitoring stack that features InfluxDB and Grafana for metrics and Elasticsearch and Kibana for logs with its Arc-enabled SQL Managed Instances.

We were surprised and delighted that Microsoft decided to use well-regarded open-source products that are easily extensible for monitoring. For example, Arc provides a Grafana Arc-enabled SQL Managed Instance dashboard with widgets that display key performance indicators and individual metrics.

A Grafana dashboard is provided for the hosts as well.

In retrospect, we should have titled this article "The Seven Things We Loved Most About Arc-enabled SQL Managed Instances, Running on Azure Stack HCI, with Arc Integration on a DataON-provided Secure Intel-based Server," as each of these products builds on and complements the others.

SQL Managed Instance provides an easy migration or creation of a database presented and consumed as a PaaS. Azure Stack HCI allows Arc-enabled SQL Managed Instance and other Azure services to run on-premises. Arc allows Azure Stack HCI and Azure in the cloud to be managed from the same web-based interface. DataON is a valued Microsoft and Intel partner that provides hardware to run Arc-enabled SQL Managed Instance in a customer's data center, in a remote office, or on the edge. Intel-based servers offer a secure foundation for this solution.

Looking back at this last paragraph, it seems like there are a lot of moving pieces in this solution, but they fit together so well that they seem to be a single solution. Perhaps an analogy would be the automobile: although an automobile comprises many complex subsections, it presents itself as something you sit in and drive, with all the underlying complexity surfacing through a single interface.

This report is sponsored by DataON Storage. All views and opinions expressed in this report are based on our unbiased view of the product(s) under consideration.

Follow this link:
Seven Things to Love About Arc-enabled SQL Managed Instances - StorageReview.com

Read More..

Akamai Grows Connected Cloud Effort with New Sites and Services – ITPro Today

Akamai continued to advance its cloud ambitions this week, with the announcement of new services and cloud locations around the world.

For much of its history, Akamai was primarily known as a content delivery network (CDN) and security service provider. That changed in 2022, when Akamai acquired alternative cloud provider Linode in a $900 million deal. Earlier this year, Akamai outlined its Connected Cloud strategy, which ties together its CDN and edge network assets with its Linode public cloud footprint, an approach designed to create more value for end users.

Related: 5 Cloud Cost Optimization Best Practices You Might Have Missed

With this week's news, Akamai is delivering on some of its Connected Cloud promises by expanding its public cloud sites and services. The new services include premium cloud instances, more object storage capacity, and a new global load balancer. The five new sites are located in Paris; Washington, D.C.; Chicago; Seattle; and Chennai, India.

Hillary Wilmoth, director of product marketing, cloud computing, at Akamai, told ITPro Today that the locations of new sites were selected based on customer feedback and to bring computing services closer to users as part of the Akamai Connected Cloud strategy.

Related: 6 Tips for Controlling Your Cloud Costs in a Recession

"These sites start to marry the cloud computing of Linode with the CDN scale of Akamai," she said.

A primary element of any public cloud service is some form of virtual compute instance. As part of the new updates, Akamai introduced what the company refers to as "premium" instances.

Premium instances guarantee compute resources, a minimum processor model (currently an AMD EPYC 7713), and easy upgrades to newer hardware as it becomes available, Wilmoth explained.

"Premium CPU instances enable businesses to design their applications and infrastructure around a consistent bare minimum performance spec," she said.

Storage capacity is also getting a big boost. Akamai is now expanding its object storage bucket size to a maximum of 1 billion objects and 1 petabyte of data per storage bucket, representing a doubling of prior limits.

The other new piece of Akamai's public cloud service is the Akamai Global Load Balancer. The new load balancer expands on capabilities that Linode had been offering with its NodeBalancers service. Linode NodeBalancers direct traffic from the public internet to instances within the same data center, Wilmoth said. For example, a NodeBalancer can distribute traffic evenly between a cluster of web servers.

In contrast, she said the Akamai Global Load Balancer provides load balancing based on performance, weight, and content (HTTP/S headers, query strings, etc.) while being able to support multi-region or multicloud deployments, independent of Akamai.
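The distinction is easy to sketch. The toy Python below routes a request either by a content rule (an invented header) or by region weight, approximating the weighted, content-aware behavior described; none of it reflects Akamai's actual configuration model.

```python
import random

# Toy contrast of the two modes: a content rule pins matching traffic to one
# region, otherwise requests are spread by weight. Header names and weights
# are invented examples, not Akamai's configuration model.
REGIONS = [("us-east", 3), ("eu-west", 1)]  # (region, weight)

def pick_region(headers: dict) -> str:
    if headers.get("X-Service") == "api":   # content rule: pin API traffic
        return "us-east"
    names, weights = zip(*REGIONS)          # otherwise route by weight
    return random.choices(names, weights=weights, k=1)[0]

print(pick_region({"X-Service": "api"}))    # -> us-east
print(pick_region({}))                      # -> us-east ~75% / eu-west ~25%
```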

The new sites and services are all part of the momentum that Akamai has been building since its acquisition of Linode.

Wilmoth said the vision is to build a cloud for the future that challenges the existing centralized design of current cloud architectures. Looking forward, she said to expect the Akamai Connected Cloud to include additional core sites as well as distributed sites designed to bring powerful compute resources to hard-to-reach locations. Akamai is also planning to expand capabilities and capacity for object storage, including multi-cluster support for horizontal scaling and automated bucket placement to optimize resource utilization.

"We'll continue to support open source, cloud-native, and partner integrations in pursuit of portable cloud workloads that align with best practices for multicloud," Wilmoth said.

Here is the original post:
Akamai Grows Connected Cloud Effort with New Sites and Services - ITPro Today

Read More..

Generative AI & the future of data centers: Part VII – The Data Centers – DatacenterDynamics

Digital Realty's CEO and more on what generative AI means for the data center industry

A potential shift in the nature of workloads will filter down to the wider data center industry, impacting how they are built and where they are located.

Digital Realty's CEO Andy Power believes that generative AI will lead to a monumental wave of demand.

"It's still new as to how it plays out in the data center industry, but it's definitely going to be large-scale demand. Just do the math on these quotes of spend and A100 chips and think about the gigawatts of power required for them."

When he joined the business nearly eight years ago, "we were moving from one to three megawatt IT suites, and we quickly went to six to eight, then tens," he recalled. "I think the biggest building we built was 100MW over several years. And the biggest deals we'd sign were 50MW-type things. Now you're hearing some more deals in the hundreds of megawatts, and I've had preliminary conversations in the last handful of months where customers are saying talk to me about a gigawatt."

For training AI models, Power believes that we'll see a change from the traditional cloud approach, which focuses on splitting up workloads across multiple regions while keeping them close to the end user.

"Given the intensity of compute, you can't just break these up and patchwork them across many geographies or cities," he said. "At the same time, you're not going to put this out in the middle of nowhere, because of the infrastructure and the data exchange."

These facilities will still need close proximity to other data centers hosting more traditional data and workloads, but exactly how close that AI workload needs to sit relative to cloud and data is still an unknown.

He believes that it will still be very major metro focused, which will prove a challenge because "you're going to need large swaths of contiguous land and power, but it's harder and harder to find a contiguous gigawatt of power," he said, pointing to transmission challenges in Virginia and elsewhere.

As for the data centers themselves, "plain and simple, it's gonna be a hotter environment, you're just going to put a lot more power-dense servers in and you're gonna need to innovate your existing footprints, and your design for new footprints," he said.

"We've been innovating for our enterprise customers in terms of looking at liquid cooling. It's been quite niche and trial, to be honest with you," he said. "We've also been doing co-design with our hyperscale customers, but those have been exceptions, not the norms. I think you're gonna see a preponderance of more norms."

Moving forward, he believes that "you'll have two buildings that will be right next to each other and one will be supporting hybrid cloud. And then you have another one next to it that is double or triple the size, with a different design, and a different cooling infrastructure, and a different power density."

Amazon agrees that large AI models will need specialized facilities. "Training needs to be clustered, and you need to have really, really large and deep pools of a particular capacity," AWS' Chetan Kapoor said.

"The strategy that we have been executing over the last few years, and we're going to double down on, is that we're going to pick a few data centers that are tied to our main regions, like Northern Virginia (US-East-1) or Oregon (US-West-2) as an example, and build really large clusters with dedicated data centers. Not just with the raw compute, but also couple it with storage racks to actually support high-speed file systems."

On the training side, the company will have specialized cluster deployments. "And you can imagine that we're going to rinse and repeat across GPUs and Trainium," Kapoor said. "So there'll be dedicated data centers for H100 GPUs. And there'll be dedicated data centers for Trainium."

Things will be different on the inference side, where it will be closer to the traditional cloud model. "The requests that we're seeing is that customers need multiple availability zones, they need support in multiple regions. That's where some of our core capability around scale and infrastructure for AWS really shines. A lot of these applications tend to be real-time in nature, so having the compute as close as possible to the user becomes super, super important."

However, the company does not plan to follow the same dense server rack approach of its cloud competitors.

"Instead of packing in a lot of compute into a single rack, what we're trying to do is to build infrastructure that is scalable and deployable across multiple regions, and is as power-efficient as possible," Kapoor said. "If you're trying to densely pack a lot of these servers, the cost is going to go up, because you'll have to come up with really expensive solutions to actually cool it."

Google's Vahdat agreed that we will see specific clusters for large-scale training, but noted that over the longer term it may not be as segmented. "The interesting question here is, what happens in a world where you're going to want to incrementally refine your models? I think that the line between training and serving will become somewhat more blurred than the way we do things right now."

Comparing it to the early days of the Internet, where search indexing was handled by a few high-compute centers but is now spread across the world, he noted: "We blurred the line between training and serving. You're gonna see some of that moving forward with this."

While this new wave of workload risks leaving some businesses in its wake, Digital Realty's CEO sees this moment as a rising tide to raise all ships, coming as a third wave when the second and first still haven't really reached the shore.

The first two waves were customers moving from on-prem to colocation, and then to cloud services delivered from hyperscale wholesale deployments.

That's great news for the industry, but one that comes after years of the sector struggling to keep up. "Demand keeps out-running supply; [the industry] is bending over coughing at its knees because it's out of gas," Power said. "The third wave of demand is not coming at a time that is fortuitous for it to be easy streets for growth."

For all its hopes of solving or transcending the challenges of today, the growth of generative AI will be held back by the wider difficulties that have plagued the data center market - the problems of scale.

How can data center operators rapidly build out capacity at a faster and larger scale, consuming more power, land, and potentially water - ideally all while using renewable resources and not causing emissions to balloon?

"Power constraints in Northern Virginia, environmental concerns, moratoriums, NIMBYism, supply chain problems, worker talent shortages, and so on," Power said, listing the external problems.

And that ignores the stuff that goes into the data centers that the customer owns and operates. A lot of these things have long lead times, with GPUs currently hard for even hyperscalers to acquire, causing rationing.

"The economy has been running hot for many years now," Power said, "and it's gonna take a while to replenish a lot of this infrastructure, bringing transmission lines into different areas. And it is a massive interwoven, governmental, local community effort."

While AI researchers and chip designers face the scale challenges of parameter counts and memory allocation, data center builders and operators will have to overcome their own scaling bottlenecks to meet the demands of generative AI.

"We'll continue to see bigger milestones that will require us to have compute not become the deterrent for AI progress and more of an accelerant for it," Microsoft's Nidhi Chappell said. "Even just looking at the roadmap that I am working on right now, it's amazing, the scale is unprecedented. And it's completely required."

As we plan for the future, and try to extrapolate what AI means for the data center industry and humanity more broadly, it is important to take a step back from the breathless coverage that potentially transformational technologies can engender.

After the silicon boom, the birth of the Internet, the smartphone and app revolution, and cloud proliferation, innovation has plateaued. Silicon has gotten more powerful, but at slower and slower rates. Internet businesses have matured, and solidified around a few giant corporations. Apps have winnowed to a few major destinations, rarely displaced by newcomers. Each new smartphone generation is barely distinguishable from the last.

But those who have benefitted from the previous booms remain paranoid about what could come next and displace them. Those who missed out are equally seeking the next opportunity. Both look to the past and the wealth generated by inflection points as proof that the next wave will follow the same path. This has led to a culture of multiple false starts and overpromises.

The metaverse was meant to be the next wave of the Internet. Instead, it just tanked Meta's share price. Cryptocurrency was meant to overhaul financial systems. Instead, it burned the planet, and solidified wealth in the hands of a few. NFTs were set to revolutionize art, but rapidly became a joke. After years of promotion, commercial quantum computers remain as intangible as Schrödinger's cat.

Generative AI appears to be different. The pace of advancement and the end results are clearly evidence that there are more tangible use cases. But it is notable that crypto enthusiasts have rebranded as AI proponents, and metaverse businesses have pivoted to generative ones. Many of the people promoting the next big thing could be pushing the next big fad.

The speed at which a technology advances is a combination of four factors: The intellectual power we bring to bear, the tools we can use, luck, and the willingness to fund and support it.

We have spoken to some of the minds exploring and expanding this space, and discussed some of the technologies that will power what comes next - from chip-scale up to data centers and the cloud.

But we have not touched on the other two variables.

Luck, by its nature, cannot be captured until it has passed. Business models, on the other hand, are usually among the easier subjects to interrogate. Not so in this case, as the technology and hype outpace attempts to build sustainable businesses.

Again, we have seen this before with the dotcom bubble and every other tech boom. Much of it is baked into the Silicon Valley mindset: betting huge sums on each new tech without a clear monetization strategy, hoping that the scale of transformation will eventually lead to unfathomable wealth.

Higher interest rates, a number of high-profile failures, and the collapse of Silicon Valley Bank have put such a mentality under strain.

At the moment, generative AI companies are raising huge sums on the back of wild promises of future wealth. The pace of evolution will depend on how many can escape the gravity well of scaling and operational costs, to build realistic and sustainable businesses before the purse strings inevitably tighten.

And those winners will be the ones to define the eventual shape of AI.

We do not yet know how expensive it will be to train larger models, nor if we have enough data to support them. We do not know how much they will cost to run, and how many business models will be able to bring in enough revenue to cover that cost.

We do not know whether large language model hallucinations can be eliminated, or whether the uncanny valley of knowledge, where AIs produce convincing versions of realities that do not exist, will remain a limiting factor.

We do not know in what direction the models will grow. All we know is that the process of growth and exploration will be nourished by ever more data and more compute.

And that will require a new wave of data centers, ready to meet the challenge.

Go here to see the original:
Generative AI & the future of data centers: Part VII - The Data Centers - DatacenterDynamics

Read More..

Ingine and QuoVadis announce partnership – Royal Gazette

Updated: Jul 12, 2023 07:23 AM

Bermudian-based cloud solutions: Gavin Dent, QuoVadis CEO and Fernando De Deus, Ingine CEO (Photograph supplied)

As Bermuda companies increasingly seek out the next generation in technologies, a new collaboration will leverage that by offering enhanced cloud-based IT solutions.

Managed services provider Ingine is partnering its IT services with QuoVadis, the provider of managed data-centre, co-location and cloud hosting services to the local and international business communities.

Ingine will now offer a complete suite of cloud solutions to new and existing customers as a managed service, helping them fully digitalise their workplace through tighter integration with existing applications, software and IT management systems.

The company said the integration of these cloud solutions offers customers an end-to-end managed service, cloud migration expertise and access to Ingine's extensive capabilities in managing complex technology environments.

Fernando De Deus, chief executive officer of Ingine, commented: "Bermuda is fast becoming a hotspot for companies investing in the latest innovative technologies, and our strategic partnership ensures they have the right infrastructure and security to be successful.

"We've seen many local businesses adopt cloud computing in an effort to become more agile, lower IT costs, and have the ability to scale, so it's important the solutions are strategically managed to avoid costly inefficiencies and security risks.

"By joining forces, Ingine will be able to leverage QuoVadis' expertise and advanced infrastructure platform to deliver Bermuda-based cloud solutions which encompass servers, storage, networking, security, and trusted IT specialists.

"This will help Bermuda-based companies to access the full benefits of a managed infrastructure service.

"What's more, we are working with Bermuda's first technology training centre, ConnecTech, to offer clients bespoke training to adopt new technologies effectively and drive operational excellence from within their teams."

Gavin Dent, QuoVadis chief executive, added: "There's no one-size-fits-all solution when investing in cloud-based infrastructure.

"By offering our partnered services to Bermuda clients, we are helping more businesses to thrive and remain secure.

"We also believe that true transformation requires investment in a digital workplace, which is why the decision to work collaboratively with ConnecTech is going to ensure that what we do is sustainable for local businesses."

Read the original here:
Ingine and QuoVadis announce partnership - Royal Gazette

Read More..

Cloud Server Market ICT Industry Global Latest Trends and Insights 2023 to 2030 IBM, HP, Dell – openPR

The global Cloud Server Market size was valued at $538.8 billion in 2022 and is projected to reach $1,212.9 billion by the end of 2029, growing at a CAGR of 15.9%. The market is expanding due to increased automation and agility, the need to deliver an enhanced customer experience, greater cost savings and return on investment, the rise of remote work culture, and growing demand for cloud-based business continuity tools and services. Increased spending on cloud-based services, business expansion by large vendors across geographies to acquire an untapped customer base, the proliferation of digital content, an upsurge in internet usage, and the need for disaster recovery and contingency plans are also expected to drive market growth.

A Free Research Sample of Cloud Server Market is Available- https://www.infinitybusinessinsights.com/request_sample.php?id=1499107?utm_source=OP_PPS

In this age of digitization, organizations are shifting to the cloud for cost-effective and flexible on-demand data storage. Governments across countries are likewise investing in cloud server delivery models. Cloud servers reduce the cost of acquiring, establishing, running, and maintaining technology services. Governments can considerably increase productivity by streamlining their technology operations, particularly in the time it takes to process citizen-facing transactions. Cloud servers also enable governments to adapt quickly to user needs and scale public services as required. IoT devices produce data that must be collected and processed either locally or remotely on a server, and remote data hosting and analytics are the more practical, cost-effective solution in many IoT applications.

Profitable players of the Cloud Server market are:

IBM
HP
Dell
Oracle
Lenovo
Sugon
Inspur
Cisco
NTT
SoftLayer
Rackspace
Microsoft
Huawei

Have Any Questions Regarding Global Cloud Server Market Report, Ask Our Experts- https://www.infinitybusinessinsights.com/enquiry_before_buying.php?id=1499107?utm_source=OP_PPS

Product types of the Cloud Server industry are:

IaaS (Infrastructure-as-a-Service)
PaaS (Platform-as-a-Service)
SaaS (Software-as-a-Service)

Applications of this report are:

Education
Financial
Business
Entertainment
Others

This Cloud Server Market study report is designed to help businesses accomplish their best results, manage operations, and attract more consumers. It helps market players identify business risks and move ahead in the business. The report covers the important market aspects newly entering players need to survive in a competitive marketplace, captures the latest updates on market expansion, and supports stronger product launches for long-term survival.

To Remain 'ahead' of Your Competitors, Request for a Free Sample- https://www.infinitybusinessinsights.com/request_sample.php?id=1499107?utm_source=OP_PPS

Essential regions of the Cloud Server market are:

- Cloud Server North America Market includes (Canada, Mexico, USA)
- Cloud Server Europe Market includes (Germany, France, Great Britain, Italy, Spain, Russia)
- Cloud Server Asia-Pacific Market includes (China, Japan, India, South Korea, Australia)
- Middle East and Africa (Saudi Arabia, United Arab Emirates, South Africa)
- Cloud Server South America Market includes (Brazil, Argentina)

Global Cloud Server Market Research FAQs

- What is the study period of this market?
- What is the growth rate of Cloud Server Market?
- What is Cloud Server Market size?
- Which region has highest growth rate in Cloud Server Market?
- Which region has largest share in Cloud Server Market?
- Who are the key players in Cloud Server Market?

The Cloud Server market research report contains the following TOC:
1 Report Overview
1.1 Study Scope
2 Global Growth Trends
2.1 Global Cloud Server Market Perspective (2017-2030)
2.2 Growth Trends by Regions
2.3 Global Industry Dynamic
3 Competition Landscape by Key Players
3.1 Global Top Players by Revenue
3.2 Global Cloud Server Market Share by Company Type (Tier 1, Tier 2 and Tier 3)
3.3 Players Covered: Ranking by Cloud Server Revenue
4 Global Breakdown Data by Provider
4.1 Historic Cloud Server Market Size by Provider (2017-2023)
4.2 Forecasted Cloud Server Market Size by Provider (2023-2030)
5 Breakdown Data by End User
5.1 Historic Cloud Server Market Size by End User (2017-2023)
5.2 Forecasted Cloud Server Market Size by End User (2023-2030)
6 North America
7 Europe
8 Asia-Pacific
9 Latin America
10 Middle East and Africa
11 Key Players Profiles
12 Analyst's Viewpoints/Conclusions
13 Appendix

Browse Full Reports with TOC Here- https://www.infinitybusinessinsights.com/reports/global-cloud-server-market-global-outlook-and-forecast-2023-2030-1499107?utm_source=OP_PPS

Contact Us

Sales Co-Ordinator
International: +1-518-300-3575
Email: inquiry@infinitybusinessinsights.com
Website: https://www.infinitybusinessinsights.com
Facebook: https://facebook.com/Infinity-Business-Insights-352172809160429
LinkedIn: https://www.linkedin.com/company/infinity-business-insights
Twitter: https://twitter.com/IBInsightsLLP

About Us:

Infinity Business Insights is a well-known market research company that specializes in syndicated research, personalized research, and consultancy. We are committed to delivering data that perfectly fits our clients' business needs, thanks to a team of highly qualified analysts with expertise across many industrial disciplines. We deliver unique resilience and integrated methods due to our interdisciplinary expertise and constant dedication to excellence. We strive constantly to find the most promising market prospects and provide useful information to help your business grow in the market. Our goal is to give customized solutions to multifaceted business challenges, allowing for a more streamlined decision-making process.

This release was published on openPR.

Visit link:
Cloud Server Market ICT Industry Global Latest Trends and Insights 2023 to 2030 IBM, HP, Dell - openPR

Read More..