
I went to Apple's iPhone 15 launch – 4 things you missed and the first is good news for your wallet… – The US Sun

GADGET news has been delivered thick and fast at this week's Apple event.

But there were four things you might have missed unless you were watching very closely.

The prices of gadgets appear on the big screen only fleetingly (check out our Apple event live blog).

And it's hard to know if you're getting a good deal unless you compare it to last year's products.

For Americans, the good news is that prices on the new iPhone 15 models didn't rise this year.

The iPhone 15 is $799, the iPhone 15 Plus is $899, and the Pro is $999.

Seemingly the only exception is the Pro Max for $1,199, versus last year's $1,099 model.

But Apple has increased the base storage from 128GB to 256GB for this unit, so the price is the same as last year's 256GB model.

In the UK, it's even better news.

The iPhone 15 is £799 and the iPhone 15 Plus is £899 – both a £50 price cut versus the year before.

The iPhone 15 Pro is £100 less than last year's model at £999.

And the Pro Max is flat versus the year before at £1,199.

This was almost a blink-and-you'll-miss-it moment but got huge cheers from the audience.

Apple is finally adding some higher-storage tiers for iCloud.

So if you're approaching the limits of your iCloud storage, you'll be able to upgrade to new 6TB and 12TB tiers.

The bad news is that they don't come cheap.

For 6TB, you'll be paying $30 / £26.99 a month, and 12TB will cost a hefty $60 / £54.99 each month.

Of course these are iCloud+ plans, so you get bonus features like Hide My Email, Private Relay, and Family Sharing for cloud storage.

Another quick-fire update came for the AirPods.

Like the iPhone 15, the new AirPods feature a USB-C port.

That means new MacBook, iPad, iPhone and AirPods models all use the same type of connector: USB-C.

This has unlocked a brilliant trick: charging your AirPods with your iPhone.

You can now use a USB-C to USB-C cable to connect an iPhone and AirPods to give your headphones a quick boost.

It's not clear how much iPhone charge this will drain yet, but it's a handy trick that could work in a pinch if your AirPods are out of charge.

Another fun upgrade comes with the Apple Watch Series 9 and Apple Watch Ultra 2.

Both smartwatches feature upgraded storage.

Back with the Series 5, Apple increased local watch storage to 32GB.

And now for the first time, the two new models feature 64GB of local storage.

It means you can cram even more music or podcasts onto the device and leave your iPhone at home when you're out for a run, for instance.

See more here:
I went to Apple's iPhone 15 launch – 4 things you missed and the first is good news for your wallet... - The US Sun

Read More..

First Mile looks to connect you from almost anywhere to the cloud – RedShark News

How do you connect to the cloud when you're not near a fixed network connection? First Mile is an interesting new option to bridge that gap.

For all its ubiquity, the cloud can be tricky to connect with. Any internet connection is, by definition, connected to the cloud, but not all connections are equal to the task of handling video content - especially the high bitrate flavours associated with video production.

For fixed-location production, most buildings in most cities have perfectly adequate - and sometimes very fast - internet connections. But the further you get from population centres, the harder it gets to find a suitable pipeline to the cloud. You might also want to use an entirely separate network for security and performance reasons.

It's easy to be lulled into a false sense of available connectivity by smartphones, but these are subject not only to the laws of physics but also to the operational and commercial priorities of mobile networks.

You might have noticed that, sometimes, your mobile signal can come and go, even if you're not moving. That shouldn't happen, but one reason it might is that networks have a fixed capacity. At busy times, if more people are trying to connect than the network has capacity for, it will - counter-intuitively - *reduce* the power of its transmitter until it is only in contact with the number of phones it can cope with. For everyone else, it's tough, mainly because these outages are entirely unpredictable. It can happen when there's a traffic accident on a major road, causing everyone to phone ahead to say they will be late.

Networks can be strong in some areas and weak in others. Sometimes, you'll find that your phone works well in a location, but the data SIM you're using to access cloud services is connected to a network without coverage.

If your production workflow depends on cloud services, then you need more than this. Combining mobile networks and LEO (Low Earth Orbit) satellites can help, but no single mobile service can provide the sort of reliability you need if high bandwidth mobile connectivity is on your critical path. Nor can it match the security of a purposely designed network that always operates within a VPN tunnel.

If you're not already using cloud workflows, let's look at why you might want to. While early camera-to-cloud demonstrations were little more than a proof of concept, cloud workflows have now reached a level of proficiency where they are a viable option for any level of video production, and the advantages are enormous. The cloud is geographically agnostic. It's everywhere, subject only to a decent and reliable connection. It enables remote and collaborative working. You can set up teams across the globe (or across the street) and share media with them at any time.

With camera to cloud you can go from acquisition to publishing on social media - or even a news channel - within minutes. Atomos calls this "From Lens to Likes", with good reason: if you can get your material published first, you'll get the majority of the traffic. You'll get attention, engagement, and all the benefits of revenue that is proportional to views.

But there is still the question of how you connect to the cloud when even the fast mobile networks are designed for short, bursty, transactional data and not large, long media files. This is the problem that First Mile is designed to solve with scalable options for a wide range of operating scenarios. The company's mobile data products use professionally configured SIM cards, dedicated hardware and technical measures to ensure constant, high-bandwidth connectivity.

The easiest to understand is blending. Unless you're in a cave or a lead-lined room, you'll probably get some kind of mobile signal. You might also be in the WiFi range, but the signal's unreliable. Blending can spread the data load - and the risk - across multiple, diverse networks. To users, blending is transparent - it just feels like a fast, reliable connection. What's happening under the hood is that software running on your comms device analyses the current state of multiple networks and allocates your data across them to smooth out differences in data rates and to give a net increase in the available bandwidth - and reliability.
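
To make the idea concrete, here is a minimal Python sketch of weighted allocation across links - an illustration of the blending concept only, not First Mile's actual implementation. The link names, throughput figures and chunk sizes are invented for the example.

```python
# Illustrative sketch of "blending": spread outgoing data across several links
# in proportion to each link's currently measured throughput. Not First Mile's
# implementation; link names and numbers are made up for the example.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    measured_mbps: float  # most recent throughput estimate for this path

def allocate(chunks: list[bytes], links: list[Link]) -> dict[str, list[bytes]]:
    """Assign chunks to links weighted by their measured throughput."""
    total = sum(l.measured_mbps for l in links)
    plan: dict[str, list[bytes]] = {l.name: [] for l in links}
    credits = {l.name: 0.0 for l in links}
    for chunk in chunks:
        # Each link accumulates credit proportional to its share of bandwidth...
        for l in links:
            credits[l.name] += l.measured_mbps / total
        # ...and the next chunk goes to whichever link has the most credit.
        best = max(links, key=lambda l: credits[l.name])
        plan[best.name].append(chunk)
        credits[best.name] -= 1.0
    return plan

if __name__ == "__main__":
    links = [Link("5G SIM A", 40.0), Link("5G SIM B", 25.0), Link("Satellite", 85.0)]
    chunks = [b"x" * 1024] * 150  # pretend media payload split into 1 KB chunks
    for name, assigned in allocate(chunks, links).items():
        print(f"{name}: {len(assigned)} chunks")
```

In practice the throughput estimates would be refreshed continuously and failed chunks re-sent over the surviving links, which is where the reliability gain comes from.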

With 5G - an entire family of technologies unified around the goal of improving bandwidth and latency - and the appropriate mobile network account (not usually available to consumers), there's an option called network slicing, which makes it possible to reserve a fixed portion of the network bandwidth for you and your team. It means that there will never be contention with other users, as they can't encroach on your reserved patch of the data connection.

Beyond this, there are dozens of tricks of the trade and "qualified users only" tweaks and setups that are available only to companies with a deep level of competence in network configuration.

Frame.io is driving the industry towards a camera-to-cloud workflow, with companies like Atomos, Teradek and Sound Devices releasing hardware and software products that specifically embrace the cloud. Real-time cloud storage company LucidLink is among those underpinning remote, collaborative editing, bringing the prospect of cloud-only workflows closer to every filmmaker.

But if you have no fixed WiFi or physical network connection - usually the case if you're away from home or on location - then First Mile Technologies can make an incredible difference to your ability to use super-productive cloud production workflows. You can start *editing* while shooting, work collaboratively with your teams - wherever they are - and even deliver your edited content directly to social media or a newsroom.

Check out First Mile Technologies at https://firstmile.tech/

Read the rest here:
First Mile looks to connect you from almost anywhere to the cloud - RedShark News

Read More..

Manage cloud waste and high costs with automation – TechTarget

Cloud cost management shouldn't be an afterthought for your organization. Manually analyzing usage and growth patterns, allocating costs and conducting cost snapshots are time-consuming activities, prone to human error.

Organizations can automate ways to detect, track and report abnormal cloud activities to keep cloud spending within budget. The benefits of automating cloud cost optimization include the following:

Read about the manual tasks automation can take off your to-do list and the helpful automation tools to consider.

There are several cloud cost management tasks that teams should automate to reduce human error and improve efficiency.

Right-sizing instances enables organizations to eliminate overprovisioning and allocate resources optimally, reducing cloud waste. To identify underutilized or oversized instances, implement automation that analyzes performance metrics and utilization data. Some tools can recommend appropriate instance types based on workload requirements, as well as automatically resize instances to optimize cost and performance.
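
As a rough sketch of that analysis step, the script below flags low-utilization instances, assuming an AWS account and the boto3 SDK; the 14-day window and 10% CPU threshold are arbitrary illustrations, not values the article prescribes.

```python
# Hypothetical right-sizing helper (AWS + boto3 assumed): flag running EC2
# instances whose hourly average CPU never reached a threshold over a window.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def underutilized_instances(cpu_threshold: float = 10.0, days: int = 14) -> list[str]:
    """Return IDs of running instances whose hourly average CPU stayed below the threshold."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    flagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                stats = cloudwatch.get_metric_statistics(
                    Namespace="AWS/EC2",
                    MetricName="CPUUtilization",
                    Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
                    StartTime=start,
                    EndTime=end,
                    Period=3600,          # hourly datapoints
                    Statistics=["Average"],
                )
                points = stats["Datapoints"]
                if points and max(p["Average"] for p in points) < cpu_threshold:
                    flagged.append(instance["InstanceId"])
    return flagged

if __name__ == "__main__":
    print("Candidates for downsizing:", underutilized_instances())
```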

Discounted instances can also cut costs. Automated tools can analyze usage patterns and recommend the optimal number and type of instances. These tools can often track the expiration dates of existing reservations and provide alerts or automate the purchasing process.

Automating cost data collection from cloud service providers using scripts or third-party tools saves time. It retrieves and consolidates cost data from multiple sources into a centralized system or dashboard without human intervention.

Cloud cost allocation, especially chargeback, is too important to risk human error. Automating it enables teams to analyze usage data and apply predefined allocation rules that assign costs to departments, projects or cost centers automatically. This eliminates the need for manual data manipulation and enables organizations to deliver reporting automatically at a regular cadence -- every week or month.

Locking down policy enforcement with automation enables organizations to implement resource usage policies and rules for countering cloud waste. For example, organizations can set an automated policy to enforce tagging standards, ensuring all resources are correctly labeled for cost allocation and management purposes.
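
A tagging check of that kind might look like the sketch below, assuming AWS and boto3; the required tag keys are an example convention, not something the article prescribes, and the printed report stands in for whatever ticketing or chat integration a team actually uses.

```python
# Hypothetical tagging-policy check (AWS + boto3 assumed): report EC2 instances
# missing any of the tag keys the organization requires for cost allocation.
import boto3

REQUIRED_TAGS = {"cost-center", "project", "owner"}  # example policy, not prescriptive

def untagged_instances() -> dict[str, set[str]]:
    """Map instance IDs to the required tag keys they are missing."""
    ec2 = boto3.client("ec2")
    violations: dict[str, set[str]] = {}
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                present = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - present
                if missing:
                    violations[instance["InstanceId"]] = missing
    return violations

if __name__ == "__main__":
    for instance_id, missing in untagged_instances().items():
        print(f"{instance_id} is missing tags: {', '.join(sorted(missing))}")
```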

Cloud teams can shut down or scale down resources during nonbusiness hours or periods of low demand with scheduling features and automation scripts. Automating lifecycle policies can manage data retention, archiving and deletion for storage resources.
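
A minimal off-hours shutdown job could look like the following, again assuming AWS and boto3. The "auto-stop" tag name and the 8pm-6am weekday window are conventions invented for the example; in practice the script would run from cron or a scheduled function.

```python
# Hypothetical off-hours shutdown job (AWS + boto3 assumed): stop running
# instances that opted in via an "auto-stop" tag, outside business hours.
import boto3
from datetime import datetime

ec2 = boto3.client("ec2")

def outside_business_hours(now: datetime) -> bool:
    # Weekends, or weekdays before 6am / after 8pm, count as off-hours here.
    return now.weekday() >= 5 or now.hour >= 20 or now.hour < 6

def stop_tagged_instances() -> None:
    """Stop running instances that opted in via the auto-stop tag."""
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:auto-stop", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        i["InstanceId"]
        for r in response["Reservations"]
        for i in r["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
        print("Stopped:", instance_ids)

if __name__ == "__main__":
    if outside_business_hours(datetime.now()):
        stop_tagged_instances()
```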

Automated reporting tools can generate cost reports, dashboards and visualizations based on predefined templates or customizable requirements. These tools eliminate the need for manual data manipulation and analysis. Automated tools help guide cloud teams to deliver appropriate and actionable data to their stakeholders.

Using automation features can help detect unusual spending patterns or unexpected cost increases. For example, automated systems can monitor spending against budget thresholds and send alerts or notifications to relevant stakeholders when costs exceed predefined limits. Additionally, there can be alerts for potential issues or misconfigurations, helping prevent costly leaks and enabling proactive cost management.
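
The threshold check itself can be a small script, sketched below with the AWS Cost Explorer API via boto3 as an assumed backend; the $10,000 budget, the 80% alert level and the plain print() "alert" stand in for whatever figures and notification channel a team actually uses.

```python
# Hypothetical budget-threshold check (AWS Cost Explorer via boto3 assumed):
# compare month-to-date spend against a budget and raise a simple alert.
import boto3
from datetime import date

ce = boto3.client("ce")
MONTHLY_BUDGET_USD = 10_000.0  # example budget, not a recommendation

def month_to_date_spend() -> float:
    today = date.today()
    start = today.replace(day=1).isoformat()
    end = today.isoformat()  # note: Cost Explorer treats the end date as exclusive
    result = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
    )
    return sum(
        float(period["Total"]["UnblendedCost"]["Amount"])
        for period in result["ResultsByTime"]
    )

if __name__ == "__main__":
    spend = month_to_date_spend()
    if spend > MONTHLY_BUDGET_USD * 0.8:  # alert once 80% of the budget is used
        print(f"ALERT: month-to-date spend ${spend:,.2f} exceeds 80% of budget")
    else:
        print(f"Spend on track: ${spend:,.2f}")
```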

Despite the expertise of FinOps teams, making cloud cost optimization recommendations requires automation. Implementing AI-powered cost optimization tools enables teams to analyze historical cloud usage patterns, identify cost-saving opportunities and provide actionable recommendations.

Tools can help in the journey to automate cloud cost management. Consider the following:

View post:
Manage cloud waste and high costs with automation - TechTarget

Read More..

Migrating to the cloud transforms business – MIT Technology Review

To successfully migrate to the cloud and subsequently collaborate and deploy cloud technologies, Garcia stresses the importance of clear communication among employees as well as stakeholders. "Involve application teams, service owners, end users early in the development and delivery of the strategy. Again, just bringing everyone along for the journey, I cannot overstate how important that is," says Garcia.

A hybrid approach to transformation that combines cloud migration with the retention of some applications, dedicated data centers, and intermediary migration environments can ensure cost effective and secure operations. With enterprise-wide communication underpinning any successful transformation, Garcia outlines having a strong and flexible governance framework, collaborating with external digital partners, and adapting to agile ways of working as best practices for complex cloud migrations.

Looking to the cloud-enabled future, Garcia identifies the convergence of AI and edge computing, mounting progress in quantum computing, and the proliferation of IoT connected devices as transformative technologies that will drive forward better business outcomes.

"I think that the convergence of edge computing and AI presents an exciting opportunity for the real-time data, a real-time low latency processing and decision making at the network edge, which is extremely critical for us, given all of the platforms, rigs that we have out across the globe," says Garcia.

This episode of Business Lab is produced in partnership with Infosys Cobalt.

Laurel Ruma: From MIT Technology Review, I'm Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is building a better cloud ecosystem. From partners to internal stakeholders, enterprises are meeting challenges by deploying and innovating with cloud computing solutions. The key is to work as a team and build talent resources to confidently adopt emerging technologies.

Two words for you: optimizing cloud.

My guest is Keisha Garcia. Keisha is the vice president of Digital Foundations at BP.

This episode of Business Lab is sponsored by Infosys Cobalt.

Welcome, Keisha.

Keisha Garcia: Thank you, Laurel. I'm happy to be here.

Laurel: Well, let's start off. So, what has BP's move to the cloud been like? From your perspective, what are the major benefits and challenges with cloud transformation?

Keisha: Yeah, so our journey from my perspective has been exciting, it's been complex, it's been a learning journey all the while. And it's been long. It's been pretty long. Our journey started in 2013 and we were experimenting on cloud computing for email services and HR learning management systems. And then you fast-forward to 2016, and about 2% of our BP applications were on the cloud. As a company, we were conducting proof of concepts and determining what was the best approach and how do we do this at scale, large scale. In 2017, we adopted a cloud-first approach, meaning that any new hardware, any new system builds were going to be on the cloud, no more adding to our existing, at the time, eight mega data centers and over 107 different data centers throughout our regions throughout the world.

We had decided that we were no longer going to add anything, unless it was to the cloud. Or if it had to be on-prem, it had to be by exception only. And so that kind of motivated us to push along and got everybody along for the journey. But again, just getting all of our business and everyone else sold into cloud and cloud concepts and all of those things, given the fact that there were a lot of unknowns at the time, meant working with different vendor partners and trying to find as knowledgeable people as possible. So again, that all fed into the complexity. By the close of 2022, we had gotten into our stride pretty well and had migrated a large part of our estate, over 90% of our estate, to be exact, to our cloud environments, which enabled our faster product and service introduction and changed the BP digital operating model, which we've moved now to a product-led organization.

So our cloud migrations, I think some of the biggest benefits were that it helped us optimize BP's technology stack. Of course, it increased our operational resilience. It introduced new network and data architectures, accelerated our technology adoption, helped to push the modernizing of our estate and keeping it evergreen, and it also assisted in the reduction of our CO2 emissions from our data centers. However, migrating to the cloud at the scale that we've had to, as large as our landscape is, again, as I said, was challenging, complex, and extensive because we had an extensive legacy IT estate. And as I said earlier on, it was hosted in the eight large mega data centers throughout the US, as well as in Europe, and then also just the myriad of data centers that we have across our regions. So the challenge, I cannot overstate, has been that, but the gains have been great.

Laurel: So that's a great look at the past and the journey that BP has been on. So, what are some of the major cloud trends you're seeing today?

Keisha: So some of the major trends that we're seeing today from a platforms perspective: we see increasing numbers of organizations looking to consolidate their business applications on cloud-based platforms, be more cloud native, and have robust data and analytics platforms as well that will allow both real-time and on-demand access to key business information. Again, as we said, we've moved to a product-led organization, so we're seeing that, of course, there are several companies that are doing the same. Digital teams are aligning to product-led operating models to ensure customer centricity and customer focus. And then also just putting that at the forefront. And then product development and enabling business-led prioritization and product delivery, which helps, again, with us aligning more to our business strategy. And given where we are going with moving to an integrated energy company and our transition with re:Invent, that has been huge for us.

There's lots of markets that we're tapping into, lots of things that we're doing, and we have to get on board with the business to be able to be dynamic and be able to shift and be able to move, and to be able to provide a faster time to market with solutions. And so from that standpoint, being on a cloud platform, having all of the technology that's available to us to do that at pace and align with our business has been awesome for us. And I'm seeing a lot more companies wanting to just share experiences and knowledge because they're trying to do the same. Also, just the cloud native piece of that: a cloud native enterprise is an organization that has aligned business and technology teams to help, again, modernize the estate, but we have to build more cloud native capability so that things can be more plug and play versus the huge build outs.

And then again, there's having to do a lot of the upgrading and all of the things that would come along with not building on top of, or being more in, the cloud native state. Also, just again, as part of our reinvention journey, this also enables climate action. We're seeing a lot of folks that are moving towards doing the things that align with the Paris Agreement, as well as all of the things that we're doing along re:Invent. So decarbonizing digital assets directly impacts about 2% of the global energy consumption. So therefore, it helps. Every bit helps. And so therefore, those are the things that we're seeing. And we're also, of course, moving in that space to also assist with us getting to net zero. There's also just being able to be more of a connected world. So 2023, this year and beyond, promises opportunities for large scale industrial 5G, broadband-based IoT usage and catapult connections for remote regions.

And so we've really started to build off that as well with building digital twins and all of the different things that we're doing at our refineries, and then also on our rigs and platforms that will capitalize on just the cloud-based technology. So, there's quite a few things that we're seeing that are trending, but things that we're already in the works with and moving towards. And the last one is just the evolution of the CIO that we're seeing. The CIO seems to have gone away. I don't see a lot of CIO titles anymore that are out there. And we definitely have moved away from that, as well as the way our organization is structured. And as I said earlier, aligning a lot more with product-led organizations and making sure that we have technology leaders that are elevating their financial acumen, along with business prioritizations and outcomes, and bringing that business value and finding where those value streams are within your business strategies and aligning to those, and then evaluating and bringing about the technology that will be the catalyst and a differentiator for most businesses, and definitely ours.

Laurel: That's quite a bit. And you mentioned this a little bit earlier, but how do you actually bring together a company to maintain and manage and optimize all of those business practices in the cloud? What are some of those best practices companies should be thinking about in order to collaborate and deploy cloud technologies?

Keisha: I think the biggest thing that we are seeing, or that I saw as best practices and lessons learned, is just providing clear stakeholder communications. If you don't have your business on board and understanding what it is that you're doing and why you're doing it and what's in it for them, it's going to be hard. It's going to be really, really hard to do a mass migration as we have, an adoption of cloud the way that we have. And I hate that covid-19 happened, but it definitely forced the business to really see the benefit, because we were pretty much at, I want to say 50%, probably a little less than 50%, on the cloud by the time covid hit, but we were on the cloud for our major things. And our business really didn't skip a beat with being able to connect from anywhere in the world.

And so they saw the benefit of that. They saw some of the things that we had talked about. But just having that clear outline of communication around what you will get, what you don't get, where you will be in each part of the journey, I cannot express how important it was to do that. Involve application teams, service owners, end users early in the development and delivery of the strategy. Again, just bringing everyone along for the journey, I cannot overstate how important that is. Modify your operating model, your digital operating model specifically, to align so that you're working more seamlessly together across different areas and allowing for the breakdown of expertise in particular areas and having that focus on that expertise and continuing to develop that and evolve that. Because technology, as always, but definitely in this space, changes extremely quickly. And so therefore, you have got to ensure that your people are getting as educated and updated with the skill sets as possible. And building on the benefits, a realization plan, was also key.

So those are some of the softer ones that I would think that people might overlook. Other ones: just the hybrid approach that we had, the hybrid approach to transformation. We recognized early on the need for a hybrid approach, combining our cloud migration with the retention of certain applications, dedicated data centers, intermediary migration environments, allowing for cost-effective and secure operations. Those are some of the best practices as far as just how we were going to transform and what that looks like, and not thinking that it's a one-size-fits-all, and being able to assess your estate and what's best if you have a large end-of-service-life estate, a legacy estate, as we did, where we were legacy with the operating model, the code base on our applications across our entire landscape; it was huge. I think when we first started this journey, a good amount, over 60% of our estate, was at the end of its service life due to one or the other of the things that I just mentioned.

And so instead of trying to tackle those separately, we decided, what's the best way for us to leverage and bring that all together? So being flexible and looking at the art of the possible across your estate and what you have to do to address multiple things was also really a great way to look at this as well. Because you and I probably know from experience, any time that you say that you're going to go back and do something, nine times out of 10, you don't. You do the first tactical thing and it stays that way forever. So it was, for me, a good thing for us to take the best practice: if we only have to do something once, or only have to open up the box once, then let's just open it up once and figure out how much transformation we can do in one go to keep us at ease, but also to cover as much modernization as we can before we hand things back over to our ops teams.

I think I've already touched on CIO buy-in and business buy-in. Those are best practices, some of those softer things that I mentioned earlier. Have a governance framework and a delivery model restructured for effectiveness, and establish upfront how things get approved and who decides, having that delivery model to ensure smooth cloud migrations while also ensuring business service continuity and accommodating evolving business requirements.

Because as you know, again, with some of the trends that we talked about or that I mentioned earlier, those trends are things that, when you start to go and implement them, the business change is enormous. And so being able to be flexible to accommodate those, but not being bogged down by who needs to approve this, who's making this decision. If you establish those things upfront with a good governance framework and a delivery model that allows for that flexibility and effectiveness, then that was also key and golden.

The collaboration with digital delivery partners. I can't express enough the value of finding great delivery partners. There's no way that all of that knowledge is held by everyone in your organization. And like I said, given the pace at which technology changes and things are being rolled out, you always need people that are also keeping their fingers on the pulse from learnings and different experiences. And you can only get that sometimes if you also work externally with external partners. And we had a couple of them, quite a few of them actually, that proved to be very, very great partners. And we all learned together with several of the others, but Infosys has been a major partner of ours. We had eight vendor partners to supplement in-house capabilities, and it was great.

Adapting modern ways of working: agility. And agility in its simplest forms, but also just being agile and utilizing agile practices, which will help you move much faster, setting up your squads, those types of things. And then of course, I'll say it again, the last one, but communication, communication, communication, and training for sustainability, and just continuing to build your knowledge base to be able to continue to support the platforms and the new technology that's coming on board. So those are some of the things that we saw, or that's lessons learned, best practices in general.

Laurel: You mentioned this a little bit earlier, but how critical is talent to that kind of cloud transformation? And what are some of the techniques - communication, clearly - for recruiting and refining talent for adopting cloud technologies?

Keisha: Yeah. Given the fact that, as we talked about, there's so many people that are going along this journey, some at the very beginning, some in the middle, some almost nearing the end. But because of that, the market out there is extremely competitive to get great talent. And then also, there's just upskilling the talent that you have in the door already. Your existing staff is also critical. So, offering a competitive compensation package, as well as providing training and certification opportunities. Because again, it's keeping your employees motivated and keeping them focused on being a hundred percent all in and passionate around what we're doing and why we're doing it, but also recognizing that people have to enjoy what they do. And the compensation package has to be great. And also, the learning opportunities and promoting a learning culture have to be there because that's what people are looking for. As we see, as people are moving from place to place, in order to retain great talent, in order to attract great talent, all of those offerings need to be there. They're important.

They're important for the success of any transformation program that you're doing for some of the reasons that I touched on earlier, the pace at which technology moves, as well as the fact that everybody is out doing all of these things to test the waters, and to also create a more sustainable environment, to also create, to be able to get to market faster, to create all the different trends that are happening with people working in different spaces and places across the globe. All of those things, the offerings have to be there to attract that talent. But most of all, also building a diverse and inclusive workforce. And in order to do that, the offering has to be there across the board for people that want to work from home, people that want to work in an office building, people that are doing different things or at different stages and points in their lives. Having that flexibility to offer your employees to retain that great talent is absolutely key and critical for the purposes of the success of your transformation.

Laurel: And then you did mention the importance of working with partners, especially when you're trying to build this collaborative ecosystem. So, what is that like working with partners in this large-scale cloud transformation?

Keisha: Again, you can't know everything, and you're not always going to get all of those things. So that's where you have the extension of, I would say an extension of additional brain power, extension of learning and those things. Leveraging partnerships with educational institutions, collaborating with universities and colleges to establish internships, co-op programs, and recruitment pipelines for cloud related roles. Because again, as you see in universities, technology is key and they're learning new things. And students coming out of universities, they're more conscious of all of the things that are going on in the environment, and they're wanting to work with people that are moving towards making the environment better and the sustainability of that. And the low carbon initiatives that are going on and getting to net zero, believe it or not, those are all things that are known. As soon as I go into recruiting at a university, it's the first thing that they ask. What's really going on with re:Invent?

What do you see and how do you utilize technology to help leverage that? So I think building those partnerships with educational institutions is great, as well as those partnerships with our third-party vendors that we've done as well, because they're doing some of the same things with getting students, as well as keeping up with the trends, keeping up their skillsets and capabilities and being able to have that flex staff to flex up and down as necessary. As you go through the ebbs and flows of your journey of transformation, things start, stop, and/or increase, and sometimes you need to move at pace, or there are just the complexities that come with marrying things, with moving off of your legacy estate and still trying to keep that BAU [business as usual] and no downtime for your business. So therefore, all of those things, you cannot know within one organization. You have to look to research and development. And again, the partnerships with the universities, the partnerships with third-party vendors are absolutely critical and key.

Laurel: You've mentioned sustainability a couple of times. So how does moving to the cloud and adopting these kinds of emerging technologies actually help BP as a company address the sustainability goals that it may have?

Keisha: We have the reduced environmental impact paired with efficient resource utilization. So, moving to the cloud allowed us to reduce our carbon footprint by transitioning from on-premise data centers to a more energy efficient cloud infrastructure. We are a dual-cloud company. We use both AWS and Microsoft Azure. And so we are definitely working with both of them on what they're doing around the energy efficient cloud infrastructure that they're pushing, and working with them on how to measure that. Also, what's the projection of what we're going to contribute as we continue to move forward and get to nearing the end of our cloud journey. It also enables us to optimize our energy consumption, like I said, by scaling resources up and down based on demand, driving efficient energy usage, reducing waste, and contributing to our wider BP sustainability goals as well.

There was a time, again, when we were on-prem and we would have large amounts of servers running. And some of those servers were literally less than 50% utilized. But yet they're still on. They're still utilizing energy as well. So this moving to cloud allows us, again, from the optimization perspective of what we consume. Also, low carbon emissions, data-driven sustainability, and enhanced operational efficiency. Moving to the cloud supports and drives our low carbon emission by enabling our company to utilize renewable power sources, so by adopting emerging technologies, such as AI and machine learning. The transformation to cloud allows for us to analyze vast amounts of data, driving our innovation and decision-making power for BP's sustainable initiatives.

And again, this has been huge for us. And being data-driven helps identify opportunities for resource optimization, emissions reduction, as well as environmental impact mitigation. So in the data space, large opportunities there. And then also, there's just a continuous improvement in innovation, and having or utilizing cloud platforms provides the necessary computational power and tools to implement advanced analytics, predictive modeling, as well as simulation techniques, which also enables us to continuously improve our sustainability performance. And it also allows for new solutions to be provided, as well as to contribute to all of the industry-wide sustainability advancements, things that when I get around other CIO tables or other tables with other people that are leading their transformations, we share ideas, we talk about the things that we're doing, how we're measuring that. And sharing that across the table is really good because, again, you get to also hear some of the things that they're doing, which gives you some of the ideas of how they're using technology to continue with the sustainability goals.

So from that standpoint, that's how we've helped to leverage our cloud transformation to help with our sustainability aspirations of getting to net zero.

Laurel: Yeah, that's quite significant. You've outlined major cloud trends like going cloud native that you're seeing today. So, what are some of the cloud-enabled technologies or use cases that you're really excited to see emerge in the next three to five years?

Keisha: So, I would say that I'm really excited about - there's quite a few, so I'll try and limit it. So, edge AI. I think that the convergence of edge computing and AI presents an exciting opportunity for the real-time data, a real-time low latency processing and decision making at the network edge, which is extremely critical for us, given all of the platforms, rigs, that we have out across the globe. That is absolutely key. And I'm excited about that because this technology helps to enable and develop our innovative applications in our industry to optimize the energy consumption of smart grids and enhance predictive maintenance and our operations. So for me, edge AI is really one that I'm excited about. Also, quantum computing. It has the potential to solve complex problems and perform computations that are currently infeasible for classical computers. So in the next three to five years, I'd expect to see significant progress in quantum computing technology, which has the potential to revolutionize the way we approach computational challenges and drive innovation across multiple sectors of our business, but multiple sectors of the energy industry in general.

And then I'll probably think of a couple more. I would say things have progressed quite far along in this space, but IoT and integration and analytics in that space: the proliferation of IoT devices continues to generate massive volumes of data. So cloud platforms will play a crucial role in processing, analyzing, and extracting meaningful insights from this data. In the next few years, and currently even today, I think we see further advancements in cloud-based IoT integration analytics, as well as enabling us and other organizations to harness the full potential of IoT data, which will drive smarter decision making and predictive maintenance, as well as asset optimization and automation, or optimization rather. So I just think from an IoT perspective, again, big driver. We've been doing digital twins, we've been doing quite a few things within our platforms and just within our production business. Those are some of the three that really excite me. And then of course, there's augmented reality, but I won't go into that. But there's a few things that are coming along that really excite us and will be driving our business forward.

Laurel: Fantastic. Keisha, thank you so much for joining us today on the Business Lab.

Keisha: Thank you for having me.

Laurel: That was Keisha Garcia, the vice president of Digital Foundations at BP, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.

That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff.

View post:
Migrating to the cloud transforms business - MIT Technology Review

Read More..

What Is PHP Hosting – Robots.net

Welcome to the world of PHP hosting! If you're new to hosting and wondering what PHP hosting is all about, you've come to the right place. PHP hosting is a type of web hosting that specifically supports websites and applications developed using the PHP programming language.

PHP is one of the most popular server-side scripting languages used for building dynamic websites and web applications. It offers a wide range of functionalities and is highly customizable, making it a favorite choice for developers around the globe. PHP hosting providers offer servers specifically optimized to run PHP applications seamlessly, ensuring optimal performance and compatibility.

In this article, we will delve into the intricacies of PHP hosting, understand how it works, explore its benefits, and discuss the different types of hosting options available. Additionally, we will provide insights on the essential features to look for when selecting a PHP hosting provider.

So, whether you're a web developer looking to host your PHP-based projects or a business owner in need of a reliable hosting solution, this article will serve as your comprehensive guide to everything you need to know about PHP hosting.

PHP hosting refers to a type of web hosting service that specifically caters to websites and applications built using the PHP programming language. PHP, which stands for Hypertext Preprocessor, is a widely used server-side scripting language that is designed for web development. It allows developers to create dynamic and interactive web pages by embedding PHP code within HTML code.

PHP hosting providers offer servers that are optimized to run PHP applications smoothly. These servers are pre-configured with the necessary software and tools to support PHP scripts, ensuring maximum compatibility and performance. With PHP hosting, you can host a wide range of PHP-based websites and applications, including blogs, e-commerce platforms, content management systems, forums, and more.

One of the key advantages of PHP hosting is its versatility and scalability. PHP supports various databases, such as MySQL, PostgreSQL, and SQLite, making it easy to integrate and interact with data. Additionally, PHP is compatible with different operating systems, including Windows, macOS, and Linux, allowing developers to choose the platform that best suits their needs.

Another significant benefit of PHP hosting is its vast community and extensive documentation. PHP has a large and active community of developers who continuously contribute to its development and provide support through forums, tutorials, and online resources. This means that if you encounter any issues or need assistance, there are abundant resources available to help you find a solution quickly.

Furthermore, PHP hosting providers often offer additional features and tools that are specifically tailored for PHP applications. These can include advanced caching mechanisms, automatic script installers, version control systems, and support for popular PHP frameworks like Laravel and Symfony. These features not only enhance the performance and security of your PHP application but also simplify the development and deployment process.

In summary, PHP hosting is a hosting service that caters to websites and applications developed using the PHP programming language. It provides optimized servers, extensive community support, and additional features that are specifically designed to enhance PHP application performance and development. If you have a PHP-based project and want a reliable and efficient hosting solution, PHP hosting is the way to go.

PHP hosting works by providing a server environment that is specifically optimized to support and execute PHP scripts. When you choose a PHP hosting provider, they allocate server resources, such as storage space, processing power, and memory, to host your PHP-based website or application.

When a user requests your PHP website or application, the server processes the PHP code, along with any necessary database queries or file operations, and generates an HTML page dynamically. This HTML page is then sent back to the user's web browser, which renders it as a fully functional web page.

To enable PHP execution on the server, PHP hosting providers typically use web server software such as Apache or Nginx, which acts as a bridge between the server and the client's web browser. These web servers have modules or extensions specifically designed to handle PHP code and process it server-side.

PHP hosting also requires a PHP runtime environment, which includes a PHP interpreter or engine that executes the PHP code. The most commonly used PHP runtime environment is PHP-FPM (FastCGI Process Manager), a highly efficient and fast PHP implementation that improves the performance of PHP applications.

To manage PHP hosting, hosting providers often use control panels such as cPanel or Plesk. These control panels give you access to various tools and features to manage your PHP applications, such as the ability to upload and manage files, configure databases, set up email accounts, and monitor website statistics.

PHP hosting providers also offer support for different versions of PHP, allowing you to choose the version that your website or application is compatible with. It is essential to keep your PHP version up to date to ensure compatibility with the latest security patches, features, and improvements.

Overall, PHP hosting works by providing a server environment that can interpret and execute PHP code. It uses specialized web server software, PHP runtime environments, and control panels to manage the hosting process efficiently. This allows your PHP-based website or application to run smoothly and deliver dynamic content to users' browsers.

PHP hosting offers several benefits that make it an attractive choice for hosting PHP-based websites and applications. Whether you are a developer or a business owner, here are some key advantages of PHP hosting:

In summary, PHP hosting offers compatibility, versatility, scalability, and affordability for hosting PHP-based websites and applications. The large developer community, extensive documentation, and availability of frameworks and libraries make PHP hosting a convenient and efficient choice for developers and businesses alike. By choosing PHP hosting, you can take advantage of these benefits and ensure that your PHP project is running smoothly and efficiently.

When choosing a PHP hosting provider, it's essential to consider certain features that can enhance the performance, security, and overall experience of hosting your PHP-based website or application. Here are some key features to look for:

When evaluating PHP hosting providers, it's essential to consider how these features align with your specific project requirements. By selecting a provider that offers the right combination of features, you can ensure that your PHP application runs smoothly, securely, and efficiently.

When it comes to PHP hosting, there are several different types of hosting options available. Each type offers unique features and caters to specific needs. Here are some of the most common types of PHP hosting:

When choosing the right type of PHP hosting, consider factors such as your budget, website traffic, scalability requirements, customization needs, and level of technical expertise. Understanding the differences between these hosting types will help you make an informed decision and select the most suitable PHP hosting option for your project.

Choosing the right PHP hosting provider is crucial for ensuring the success and smooth operation of your PHP-based website or application. With numerous hosting providers available, it's essential to consider several factors before making a decision. Here are some key considerations to help you choose the right PHP hosting provider:

Consider your specific project requirements and prioritize the features that align with your needs. It's always recommended to start with a reputable hosting provider that offers good customer support, reliable performance, and robust security measures. By carefully selecting the right PHP hosting provider, you can ensure a seamless hosting experience and set your PHP project up for success.

In conclusion, PHP hosting is a specialized hosting solution designed specifically for PHP-based websites and applications. It provides an optimized environment to run PHP scripts, ensuring maximum performance and compatibility. With a variety of hosting options available, such as shared hosting, VPS hosting, dedicated hosting, cloud hosting, managed hosting, reseller hosting, and WordPress hosting, you can choose the one that best suits your specific needs and budget.

When selecting a PHP hosting provider, it is essential to consider factors such as reliability, uptime guarantees, server performance, customer support, security measures, scalability options, and pricing. By evaluating these factors, you can ensure that your PHP project is hosted on a reliable and secure platform that meets your performance and customization requirements.

Remember to consider the specific features that are important for PHP hosting, such as support for the latest PHP version, database compatibility, developer tools, and user-friendly control panels. These features can significantly impact your development workflow and the performance of your PHP application or website.

Additionally, keep in mind that the success of your PHP hosting experience depends on choosing a reputable and reliable hosting provider. Read reviews, gather recommendations, and consider the provider's reputation in the industry before making a decision.

Overall, PHP hosting offers numerous benefits, including compatibility, versatility, scalability, cost-effectiveness, and a vast community of developers. By selecting the right PHP hosting provider and leveraging the features and resources they offer, you can ensure the smooth operation and success of your PHP-based projects.

Read this article:
What Is PHP Hosting - Robots.net

Read More..

What Will the Next Tech Rebellion Look Like? Ask the Luddites – Slashdot

In 1811 working men felt threatened by the arrival of wooden, water-powered looms. And yet "The Luddite rebellion came at a time when the working class was beset by a confluence of crises that today seem all too familiar..." writes Los Angeles Times technology columnist Brian Merchant. In an upcoming book called Blood in the Machine, he writes that "amid it all, entrepreneurs and industrialists pushing for new, dubiously legal, highly automated and laborsaving modes of production."

Fast Company has an excerpt from the book asking whether history is now repeating itself. Its headline? "A new tech rebellion is taking shape. What we can learn from the Luddites."

The reason that there are so many similarities between today and the time of the Luddites is that little has fundamentally changed about our attitudes toward entrepreneurs and innovation, how our economies are organized, or the means through which technologies are introduced into our lives and societies. A constant tension exists between employers with access to productive technologies, and the workers at their whims...

The biggest reason that the last two hundred years have seen a series of conflicts between the employers who deploy technology and workers forced to navigate that technology is that we are still subject to what is, ultimately, a profoundly undemocratic means of developing, introducing, and integrating technology into society. Individual entrepreneurs and large corporations and next-wave Frankensteins are allowed, even encouraged, to dictate the terms of that deployment, with the profit motive as their guide. Venture capital may be the radical apotheosis of this mode of technological development, capable as it is of funneling enormous sums of money into tech companies that can decide how they would like to build and unleash the products and services that shape society.

Take the rise of generative AI... Among other things, the author argues that the unending writers' strike in Hollywood illustrates "the hunger that executives have for automating even creative work, and the lengths to which their workers will go to have some say in that disruption."

And they ultimately conclude that in the end the "disrupted lives" will include more than gig workers...

Thanks to Slashdot reader tedlistens for sharing the article.

See the original post here:
What Will the Next Tech Rebellion Look Like? Ask the Luddites - Slashdot

Read More..

Exploring Decentralized Messaging and Storage in Web 3.0 … – The Coin Republic

The growth of blockchain-based ecosystems requires specialized protocols for decentralized alternatives to traditional centralized messaging and file storage. Whisper and Swarm provide peer-to-peer solutions tailored for the Web 3.0 technology stack and its unique infrastructure needs.

We gain useful insights into developing robust communication and data handling for next-generation decentralized applications by investigating their technical architectures and real-world applications. As blockchain adoption accelerates, purpose-built protocols like Whisper and Swarm will become increasingly vital in architecting serverless dApps beyond the vulnerabilities of legacy systems.

Analyzing these tools for encrypted messaging, distributed file storage, and content delivery sheds light on the possibilities for building censorship-resistant and fully decentralized user experiences.

The Whisper Messaging Protocol refers to a communication framework designed to facilitate secure and private messaging within decentralized networks. Operating within blockchain technology and peer-to-peer networks, the Whisper protocol aims to enable discreet communication between participants by utilizing advanced encryption and a decentralized architecture.

In essence, Whisper serves as a means of exchanging messages while maintaining high levels of confidentiality and data privacy. It achieves this by employing various encryption techniques that ensure only the intended recipients can decrypt and access the messages.

This protocol is particularly relevant in environments where participants seek to communicate sensitive information without compromising their privacy or the integrity of the exchanged data. By offering a secure messaging solution, the Whisper Messaging Protocol contributes to the broader goal of fostering trust and confidentiality within decentralized networks, enhancing their usability for various applications beyond financial transactions.

Key features include:

Whisper operates at Layer 2 off-chain for scalability, using the base blockchain as a Proof-of-Work anchor. The messaging is asynchronous, with messages dropping if recipients are offline. The protocol provides the underlying p2p plumbing for communication between dApp users and ecosystem participants.

Under the hood, Whisper follows a publish-subscribe pattern. Senders broadcast messages to an overlay network as topics without knowing the recipients. Nodes relay messages for a time-to-live (TTL) period. Subscribers monitor specific topics to receive related messages.

Messages have an expiry envelope for metadata like TTL and topic and an encrypted payload envelope with the sender-generated symmetric key. Topic strings are mixed through a bloom filter to obfuscate interests. Whisper is agnostic to message content. The API provides ultimate flexibility for dApp communication needs.
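
As a way to picture the pattern, here is a toy, in-process Python model of topic-based publish-subscribe with TTL expiry and an opaque payload. It is only an illustration of the ideas described above, not the real Whisper wire protocol: the XOR "cipher" stands in for proper symmetric encryption, and there is no actual peer-to-peer relaying.

```python
# Toy model of the publish-subscribe pattern described above: topics, a
# time-to-live on each envelope, and a payload only readable by holders of the
# shared key. Illustrative only -- not the Whisper protocol itself.
import time
from dataclasses import dataclass, field

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Placeholder "encryption"; real Whisper uses proper symmetric/asymmetric crypto.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

@dataclass
class Envelope:
    topic: str
    expiry: float          # absolute time after which relays drop the message
    payload: bytes         # encrypted body; relays never see the plaintext

@dataclass
class RelayNode:
    envelopes: list[Envelope] = field(default_factory=list)

    def publish(self, topic: str, message: str, key: bytes, ttl: float) -> None:
        self.envelopes.append(
            Envelope(topic, time.time() + ttl, xor_cipher(message.encode(), key))
        )

    def expire(self) -> None:
        """Drop envelopes whose TTL has passed, as a relay would."""
        now = time.time()
        self.envelopes = [e for e in self.envelopes if e.expiry > now]

    def subscribe(self, topic: str, key: bytes) -> list[str]:
        """Collect and decrypt messages for a topic the subscriber is watching."""
        self.expire()
        return [
            xor_cipher(e.payload, key).decode()
            for e in self.envelopes
            if e.topic == topic
        ]

if __name__ == "__main__":
    node = RelayNode()
    shared_key = b"session-key"
    node.publish("dapp/chat", "hello from a sender", shared_key, ttl=60.0)
    print(node.subscribe("dapp/chat", shared_key))  # ['hello from a sender']
```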

Whisper facilitates a variety of decentralized communication applications, including:

Swarm for Storage refers to utilizing a decentralized storage system within blockchain and distributed ledger technologies. Swarm is designed to provide a secure and efficient means of storing and retrieving data in a decentralized and peer-to-peer network.

Swarm for Storage implies using Swarm as a solution for data storage needs within decentralized applications (DApps) or blockchain ecosystems. Unlike traditional centralized storage services, Swarm operates on a network of nodes, with data distributed across multiple locations. This approach enhances data redundancy, security, and availability, making it particularly valuable for applications that require robust and tamper-resistant data storage.

Swarm offers a way to store data and incentivizes network participants to provide storage space and bandwidth in exchange for cryptocurrency rewards. Ultimately, Swarm for Storage represents a decentralized, blockchain-based alternative to conventional cloud storage solutions, aligning with the principles of security, data integrity, and censorship resistance often associated with blockchain technology.

Key features include:

This decentralized approach aims to build an alternative to the centralized cloud hosting providers that dominate today. The modular architecture integrates incentives and crypto-economic mechanisms tailored for different dApps.

Behind the scenes, files get split into variable-sized chunks encrypted via ECIES symmetric streams. Manifests track metadata like version history and content hashes. Retrieval happens by requesting chunks from the network via manifest syncing.

Nodes get incentivized to provide reliable long-term file storage via service fees and swarm rewards. Erasure coding replicates data across multiple nodes for redundancy against outages. Fetching is optimized by caching popular content on edge nodes.
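
To make the storage model concrete, the sketch below shows content-addressed chunking with a simple manifest. It is an illustration under stated assumptions rather than Swarm's real implementation: it uses fixed 4 KB chunks and plain SHA-256 hashing in place of Swarm's chunker, ECIES encryption, and erasure coding, and the function names are invented for this example.

```python
import hashlib
import json

CHUNK_SIZE = 4096  # assumed chunk size for this illustration

def split_into_chunks(data):
    """Split data into chunks and key each chunk by its content hash."""
    refs, store = [], {}
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        ref = hashlib.sha256(chunk).hexdigest()
        refs.append(ref)          # ordered references used for reassembly
        store[ref] = chunk        # stand-in for chunks distributed across nodes
    return refs, store

def build_manifest(path, refs, size):
    """A manifest records metadata plus the ordered chunk references."""
    return json.dumps({"path": path, "size": size, "chunks": refs})

def retrieve(manifest, store):
    """Retrieval walks the manifest and fetches each chunk by its hash."""
    entry = json.loads(manifest)
    return b"".join(store[ref] for ref in entry["chunks"])

data = b"example dApp asset " * 1000
refs, store = split_into_chunks(data)
manifest = build_manifest("site/index.html", refs, len(data))
assert retrieve(manifest, store) == data
```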

With robust decentralized storage and hosting, Swarm opens up new possibilities for dApps.

As blockchain technology continues to proliferate, purpose-built protocols like Whisper and Swarm fill critical gaps in the Web 3.0 decentralized stack. They provide robust peer-to-peer messaging, storage, and streaming solutions tailored to the unique needs of emerging decentralized ecosystems.

These protocols unlock new possibilities by empowering developers to build serverless dApps beyond the vulnerabilities of centralized systems. Their scalable support for encrypted communication, distributed file sharing, and censorship-resistant content delivery will only grow more indispensable as blockchain adoption accelerates. Analyzing their technical underpinnings and real-world applications offers a valuable perspective on the future of resilient and decentralized network architectures for Web 3.0 and beyond.

Go here to see the original:
Exploring Decentralized Messaging and Storage in Web 3.0 ... - The Coin Republic

Read More..

Bare Metal Cloud Market Size, Status, Top Emerging Trends, Growth and Business Opportunities 2026 – Benzinga

"The Best Report Benzinga Has Ever Produced"

Massive returns are possible within this market! For a limited time, get access to the Benzinga Insider Report, usually $47/month, for just $0.99! Discover extremely undervalued stock picks before they skyrocket! Time is running out! Act fast and secure your future wealth at this unbelievable discount! Claim Your $0.99 Offer NOW!

Advertorial

"IBM (US), Oracle (US), Lumen (US), Internap (US), Rackspace (US), AWS (US), Dell (US), Equinix (US), Google (US), Microsoft (US), Alibaba Cloud (China), Scaleway (France), Joyent (US), HPE (US), OVHcloud (France), Limestone Networks (US), Media Temple (US), Bigstep (UK), Zenlayer (US), and phoenixNAP (US)."

Bare Metal Cloud Market by Service Type (Compute, Networking, Database, Security, Storage, Professional, and Managed), Organization Size, Vertical (BFSI, Manufacturing, Healthcare and Life Sciences, and Government), and Region - Global Forecast to 2026

The global Bare Metal Cloud Market is expected to grow from USD 4.5 billion in 2020 to USD 16.4 billion by 2026, at a Compound Annual Growth Rate (CAGR) of 24.1% during the forecast period. Major factors expected to drive this growth include the critical need for reliable load balancing of data-intensive and latency-sensitive operations, the necessity of non-locking compute and storage resources, increased use of IoT platforms and devices for high-performance computing workloads, the elimination of compliance-related overheads, the convergence of technologies such as AI, IoT, and analytics, and the advent of fabric virtualization.
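
As a quick sanity check of the headline figures, the snippet below recomputes the implied growth rate, assuming simple annual compounding over the six years between the 2020 base and the 2026 forecast.

```python
# Assuming six compounding periods between the 2020 base year and the 2026 forecast.
base, target, years = 4.5, 16.4, 6                       # USD billion / years
implied_cagr = (target / base) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")                # ~24.1%, matching the report
print(f"4.5B compounded at 24.1% for 6 years: {base * 1.241 ** years:.1f}B")  # ~16.4
```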

Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=153940759

The market is expected to be driven by the growing need for reliable load balancing of data-intensive and latency-sensitive operations

Load balancing improves the distribution of additional workloads across bare metal cloud servers, enabling smoother operation and better allocation of resources across processes. Load balancing solutions offer the configurability and flexibility needed to manage traffic and resource usage across server nodes in real-time end-user environments, so deploying reliable load balancing in the cloud is critical. Bare metal infrastructure vendors primarily offer a single-tenant architecture in which multiple resources are combined into dedicated instances for data-intensive operations, delivering higher performance. Hypervisors in a virtualized environment consume additional server-side processing power, forcing enterprises to trade off latency-sensitive operations against low-cost cloud compute infrastructure. Custom lightweight hypervisors have been offered as an alternative to bare metal, since public cloud providers create dedicated instances that give a greater share of resources to clients for whom the cost of migrating data away from the public cloud would be significant.

Increased necessity of non-locking compute and storage resources

One of the core issues with public cloud workloads is the sharing of resources among multiple processes, which lowers the throughput of each process. Data-intensive operations and high-performance workloads require dedicated storage and compute resources in a highly secured environment to achieve the desired results. In addition, bare metal cloud services offer a flexible pay-per-use option for efficient utilization of compute and storage, and the ability to terminate an SLA without incurring significant infrastructure costs makes them a viable option for enterprises. Resource sharing and deadlocks among processes are further critical issues that organizations face in their daily operations. Bare metal cloud servers address these issues by offering non-locking compute and storage resources, delivering performance-intensive workloads with higher throughput.

Request Sample Pages: https://www.marketsandmarkets.com/requestsampleNew.asp?id=153940759

Unique Features in the Bare Metal Cloud Market:

The Bare Metal Cloud Market stands out for offering dedicated physical servers instead of virtualized instances. This essential quality guarantees consistent, high-performance computing resources and removes the difficulties posed by multi-tenant virtual environments.

Users gain from having complete control and customization over their bare metal servers since they may modify the hardware setups, operating systems, and software stacks to suit particular needs. For workloads and applications that require a lot of resources, this flexibility is very beneficial.

Because security and isolation are paramount, Bare Metal Cloud is a desirable option for businesses with strict compliance and security requirements. Compared with shared virtual environments, dedicated servers pose fewer security threats, providing a safer working environment.

Its applicability for High-Performance Computing (HPC) workloads, scientific research, and applications requiring significant computational capacity is one of its distinguishing qualities. Bare Metal Clouds maximise the performance of specialised hardware, delivering outstanding results for taxing jobs.

Bare metal servers are known for their predictable performance, which provides a constant computing environment without resource contention. For applications with strict performance requirements, where performance consistency is crucial, this predictability is a considerable advantage.

Major Highlights of the Bare Metal Cloud Market:

Major aspects that define the Bare Metal Cloud Market are numerous. It primarily provides dedicated physical servers, in sharp contrast to virtualized instances, guaranteeing excellent performance and isolation. Users gain from total flexibility and control over these servers, which enables customised setups to fulfil specific application requirements. In particular, resource-intensive jobs benefit from this level of flexibility.

The dedicated nature of bare metal servers lowers security concerns, making them an appealing option for organisations with strict compliance and security requirements. Security is a top consideration in this industry. Bare Metal Clouds also excel in the field of High-Performance Computing (HPC), utilising the full capacity of specialised hardware to provide outstanding performance.

Bare metal servers are known for their stable and predictable performance, since no virtual instances compete with them for resources. Applications with high performance demands depend greatly on this reliability. Rapid provisioning further increases the appeal of these servers by enabling speedy deployment without the usual delays of hardware acquisition.

Another important aspect is scalability, which enables companies to grow horizontally by putting in more physical servers to handle changing workloads. In use cases where performance and customisation are critical, such as database hosting, big data analytics, gaming infrastructure, and content delivery networks, bare metal clouds are designed to meet the needs of the customers.

Inquire Before Buying: https://www.marketsandmarkets.com/Enquiry_Before_BuyingNew.asp?id=153940759

Top Key Companies in the Bare Metal Cloud Market:

The bare metal cloud market includes major vendors, such as IBM (US), Oracle (US), Lumen (US), Internap (US), Rackspace (US), AWS (US), Dell (US), Equinix (US), Google (US), Microsoft (US), Alibaba Cloud (China), Scaleway (France), Joyent (US), HPE (US), OVHcloud (France), Limestone Networks (US), Media Temple (US), Bigstep (UK), Zenlayer (US), and phoenixNAP (US). The major players have implemented various growth strategies to expand their global presence and increase their market shares. Key players such as IBM, Oracle, Lumen, Internap and Rackspace have majorly adopted many growth strategies, such as new product launches, acquisitions, and partnerships, to expand their product portfolios and grow further in the bare metal cloud market.

IBM has a strong presence in the bare metal cloud market. It focuses on strengthening its product portfolio by launching new and advanced solutions in five categories: analytics, data, cloud, security, and AI. IBM helps customers streamline business processes and enhance data-driven decision-making capabilities. It offers a broad product portfolio that includes Analytics, Intelligent Automation, Cloud Computing, Blockchain, Business Operations, IT Infrastructure, Mobile Technology, Security, Software Development, and Supply Chain Management. Under the product category of bare metal cloud, IBM offers fully dedicated servers to provide maximum performance and secure, single tenancy. It caters to various verticals, including automotive, telecommunications, financial services, health, aerospace and defense, insurance, life sciences, and retail. It nurtures an ecosystem of global business partners operating in more than 170 countries. IBM research constitutes the largest industrial research organization in the world, with 12 labs across 6 continents spread across the Americas, Europe, MEA, and APAC.

Rackspace is a leading provider of bare metal cloud servers. It focuses on strengthening its product portfolio by launching new and advanced solutions in the cloud, applications, data, and security. Its bare metal private cloud is a dedicated, secure environment that offers customized compute, storage, and connectivity. Its bare metal services offer servers on an on-demand basis, thereby following a pay-as-you-go pricing model. It provides professional, consulting and advisory, and managed services to its customers. The company caters its solutions across the globe to various verticals, including financial, healthcare, manufacturing, education, oil and gas, media and entertainment, automotive and transportation, food and beverage, travel and hospitality, and retail. It also caters to a broad customer base of about 117,000 across 120 countries. It operates globally through its offices based in the Americas, Europe, MEA, and APAC.

Media Contact
Company Name: MarketsandMarkets Research Private Ltd.
Contact Person: Mr. Aashish Mehra
Email: Send Email
Phone: 18886006441
Address: 630 Dundee Road Suite 430
City: Northbrook
State: IL 60062
Country: United States
Website: https://www.marketsandmarkets.com/Market-Reports/bare-metal-cloud-market-153940759.html

Press Release Distributed by ABNewswire.com. To view the original version on ABNewswire visit: Bare Metal Cloud Market Size, Status, Top Emerging Trends, Growth and Business Opportunities 2026

More:
Bare Metal Cloud Market Size, Status, Top Emerging Trends, Growth and Business Opportunities 2026 - Benzinga

Read More..

Prediction of DDoS attacks in agriculture 4.0 with the help of prairie … – Nature.com

Here, we take a look at the IDSNet model, which was developed to identify cyber-attacks in Agriculture 4.0 using a one-dimensional convolutional neural network tuned by the prairie dog optimization (PDO) algorithm.

The agriculture 4.0 network model is provided, which is composed of the following three layers: (1) agricultural sensors; (2) fog computing; and (3) cloud computing. The agriculture industry uses data gathered by drones and other Internet of Things sensors. When certain thresholds are met in the data collected by the agricultural sensor layer, the actuators below are triggered. To ensure that Internet of Things (IoT) devices always have access to power, new energy technologies and smart grid design are implemented in the sensor layer. Every fog node has an embedded deep learning intrusion detection system. To perform analysis and machine learning algorithms, the IoT data is sent from the agricultural sensors layer to the fog computing layer, while cloud computing nodes offer storage and end-to-end services. Typically, intrusion detection systems that rely on deep learning to process alerts send their processing to fog nodes. We assume that there is a malicious party intent on disrupting the network's operations in order to compromise food security, the effectiveness of the agri-food supply chain, and output.

The CIC-DDoS2019 dataset [29] contains a total of 50,063,112 records: 50,006,249 rows of DDoS attack traffic and 56,863 rows of normal traffic, with 86 features in each row. Table 1 summarizes the dataset's attack statistics for both training and testing. The attacks covered include NTP, DNS, LDAP, MSSQL, NetBIOS, SNMP, SSDP, UDP, WebDDoS, SYN, TFTP, and PortScan, described below.
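
The quoted record counts can be verified with a couple of lines of Python; the same arithmetic also shows how small the benign share is, a class imbalance worth keeping in mind when training a detector on this data.

```python
# Sanity check of the CIC-DDoS2019 record counts quoted above.
ddos_rows, benign_rows = 50_006_249, 56_863
total = ddos_rows + benign_rows
print(f"total records: {total:,}")                  # 50,063,112, matching the stated total
print(f"benign share:  {benign_rows / total:.4%}")  # ~0.11% -> heavily imbalanced classes
```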

In a reflection-based DDoS assault known as an "NTP-based attack," an adversary abuses a server running the Network Time Protocol (NTP) to send an overwhelming amount of User Datagram Protocol (UDP) traffic to a single target. The target and its supporting network infrastructure may become inaccessible to legitimate traffic as a result of this attack.

An attack that leverages the Domain Name System (DNS) to flood a target IP address with resolution requests is called a reflection-based DDoS assault.

By sending queries to a publicly accessible vulnerable LDAP server, an attacker can generate massive (amplified) responses, which are then reflected to a target server, resulting in a DDoS attack.

MSSQL-based attacks are reflection-based DDoS attacks in which the attacker forges an IP address so that scripted requests appear to originate from the targeted server, which is then flooded with the amplified responses.

NetBIOS-based attacks are a kind of reflection-based denial-of-service attack in which the attacker delivers forged "Name Release" or "Name Conflict" signals to the target system, causing it to reject any and all incoming NetBIOS packets.

To jam the target's network pipes, an SNMP-based assault will produce attack volumes in the hundreds of gigabits per second using the Simple Network Management Protocol (SNMP).

The reflection-based SSDP attack is a DDoS attack in which the attacker uses UPnP protocols to deliver a flood of traffic to the intended victim.

A UDP-based flood attack uses IP packets carrying UDP datagrams to deliberately saturate the network connection of the victim host and cause it to crash.

To compromise a Web server or application, a WebDDoS-based attack will use seemingly innocuous HTTP GET or POST requests as a backdoor.

SYN-based attacks exploit the standard TCP three-way handshake by flooding the victim with SYN requests that are never completed with the final ACK, exhausting the victim server's network resources and rendering it unusable.

As its name suggests, an attack based on the TFTP protocol uses online TFTP servers to get access to sensitive information. An attacker makes a default request for a file, and the victim TFTP server delivers the information to the attacker's target host.

The PortScan-based attack resembles a network security audit in that it scans the open ports of a target computer or an entire network. Scanning is performed by sending queries to a remote host in an effort to learn which services are available there.

To examine the efficacy of the learning approaches in binary and multi-class classification, we generate three datasets, including one titled "Dataset 13 class." Tables 2 and 3 describe the attack statistics for each dataset during training and testing, respectively. Table 4 describes the attack categories in the Dataset 7 class.

The TON IoT dataset [30], a novel testbed for an IIoT network, includes network, operating-system, and telemetry information. Seven files containing telemetry data from Internet of Things and industrial Internet of Things sensors are listed in Table 5. The files contain the following:

File 1, "Train Test IoT Weather," includes the following classes: Normal (35,000 rows), DDoS (5,000 rows), Injection (50,000 rows), Password (50,000 rows), and Backdoor. The file shows IoT data from a networked weather sensor, including temperature, pressure, and humidity values.

File 2, "Train Test IoT Fridge," contains Normal (35,000 rows), DDoS, and Injection records (2,902 and 2,942 rows, respectively). The file contains the sensor's temperature readings and related environmental conditions as IoT data.

File 3, "Train Test IoT Garage Door," has the following categories: Normal (10,000 rows) and Ransomware (5,804 rows). The file shows whether a networked door sensor reports the door as open or closed.

File 4 "Train Test IoT GPS Tracker" has the following categories and numbers of rows: Normal (35,000), DDoS (5,000), Injection (5,000), Password (5,000), Backdoor (5,000), Ransomware (2,833 rows), XSS (577 rows), and Scanning (550 rows). Data from a networked GPS tracker sensor is shown in the file, including its latitude and longitude readings, as an example of Internet of Things (IoT) data.

File 5, "Train Test IoT Modbus," contains Normal (35,000 rows), Injection (5,000 rows), Password (5,000 rows), and Backdoor records. The file holds IoT data with the Modbus function code for reading an input register.

File 6, "Train Test IoT Motion Light," contains Normal (70,000 rows), DDoS (10,000 rows), Injection (10,000 rows), Password (10,000 rows), Backdoor (10,000 rows), Ransomware (4,528 rows), XSS (898 rows), and Scanning (3,550 rows) records. The file shows IoT data for a light switch that may be either on or off.

File 7, "Train Test IoT Thermostat," includes the following categories: Normal (35,000 rows), Injection (5,000 rows), Password (5,000 rows), and Backdoor. The file contains IoT data representing the current temperature reported by a networked thermostat sensor.

The current concept takes some cues from practical CNN applications. However, this model only needs a single raw input, and its reduced number of layers helps save time during training. Figure 1 depicts the design process that was carried out. The first step was to fine-tune the training and optimization methods as well as the layer count, filter size, and number of filters. The network's hyperparameters also had to be tuned, including the batch size, learning rate, and number of training cycles (epochs). Table 6 provides the suggested values. Second, a CNN structure was built, as laid out in Table 6. The number of layers in the model network determines the number and size of filters available in each convolutional layer. The network layout shown in bold in the table performed best after being optimized by varying a few of the options stated in the literature. Figure 1 depicts the filter setup and the internal structure of the kernel.

Figure 1: Internal structure of IDSNet.

The network employs algorithms to discover and prioritize the most relevant aspects of the raw data. To achieve this, we apply a convolution operation (convolutional layer) to the input data, producing a longer vector from which a max-pooling layer extracts the most representative features. Table 6 shows that the same steps are performed four times, with a different number of kernels in each convolution-plus-max-pooling stage. This adjustment allows feature maps to be generated that accurately capture the non-linearity of the signals. Using a filter spanning three samples with a stride of one sample, the values of a feature map are generated in sequence. The procedure is repeated in each convolutional layer and can be fine-tuned by adjusting the number and size of the filters as well as the stride. Since the output vector of the final convolutional layer is the input vector of the fully connected layer, only its map length needs to be calculated during network design. The PDO method is used to fine-tune IDSNet's hyper-parameters, such as momentum, learning rate, and number of epochs, as described below.
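
The stack described above (four convolution-plus-max-pooling stages with three-sample kernels and a stride of one, followed by a fully connected classifier) can be sketched as follows. This is a minimal illustration assuming TensorFlow/Keras; the filter counts, optimizer settings, and the 86-feature/13-class input and output sizes are assumptions standing in for the values in the paper's Table 6.

```python
import tensorflow as tf

def build_idsnet(input_len, num_classes):
    layers = tf.keras.layers
    inputs = tf.keras.Input(shape=(input_len, 1))      # one raw feature vector per flow record
    x = inputs
    for filters in (32, 64, 128, 256):                 # assumed filter counts per stage
        x = layers.Conv1D(filters, kernel_size=3, strides=1,
                          padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    # Learning rate and momentum are the kind of hyper-parameters PDO would tune.
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# 86 flow features per record and a 13-class label set, as in the datasets above.
model = build_idsnet(input_len=86, num_classes=13)
model.summary()
```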

The following were assumed to facilitate the development of models for the proposed PDO:

Each prairie dog belongs to one of the m coteries in the colony, and there are n prairie dogs in each coterie: (i) all prairie dogs are identical and can be classified into m subgroups; (ii) each coterie has its own ward inside the colony, which represents the search region for the corresponding problem.

Nesting activities generate an increase from ten burrow openings per ward to as many as one hundred. Both an antipredator call and a new food supply (burrow construction) call are used. It's only individuals of the same coterie that engage in foraging and burrow construction activities (exploration), communication, and anti-predator (exploitation) actions. Exploration and exploitation are repeated m (the number of coteries) times since other coteries in the colony undertake the same tasks at the same time and the whole colony or problem space has been partitioned into wards (coteries).

Like other population-based algorithms, the prairie dog optimization (PDO) relies on a random initialization of the placement of the prairie dogs. The search agents are the prairie dog populations themselves, and each prairie dog's position is represented by a vector in d-dimensional space.

Each prairie dog (PD) is a member of one of the m coteries, and n is the number of prairie dogs in each coterie. Because prairie dogs live and work together in coteries, each prairie dog's position within a given coterie can be uniquely described by a vector. The positions of all coteries (CT) in a colony are given by the matrix in Eq. (1):

$$CT = \begin{bmatrix} CT_{1,1} & CT_{1,2} & \cdots & CT_{1,d-1} & CT_{1,d} \\ CT_{2,1} & CT_{2,2} & \cdots & CT_{2,d-1} & CT_{2,d} \\ \vdots & \vdots & CT_{i,j} & \vdots & \vdots \\ CT_{m,1} & CT_{m,2} & \cdots & CT_{m,d-1} & CT_{m,d} \end{bmatrix}$$

(1)

Here, \(CT_{i,j}\) denotes the jth dimension of the ith coterie in the colony. The positions of all prairie dogs within a coterie are given by the matrix in Eq. (2):

$$PD = \begin{bmatrix} PD_{1,1} & PD_{1,2} & \cdots & PD_{1,d-1} & PD_{1,d} \\ PD_{2,1} & PD_{2,2} & \cdots & PD_{2,d-1} & PD_{2,d} \\ \vdots & \vdots & PD_{i,j} & \vdots & \vdots \\ PD_{n,1} & PD_{n,2} & \cdots & PD_{n,d-1} & PD_{n,d} \end{bmatrix}$$

(2)

where \(PD_{i,j}\) denotes the jth dimension of the ith prairie dog in a coterie, and \(n \times m\) is the total number of prairie dogs in the colony. Equations (3) and (4) show the uniform distribution used to initialize the position of each coterie and of each prairie dog within its coterie:

$$CT_{i,j} = U\left(0,1\right) \times \left(UB_{j} - LB_{j}\right) + LB_{j}$$

(3)

$$PD_{i,j} = U\left(0,1\right) \times \left(ub_{j} - lb_{j}\right) + lb_{j}$$

(4)

where \(UB_{j}\) and \(LB_{j}\) are the upper and lower bounds of the jth dimension of the optimization problem, \(ub_{j} = \frac{UB_{j}}{m}\) and \(lb_{j} = \frac{LB_{j}}{m}\), and \(U(0,1)\) is a random number drawn from a uniform distribution between 0 and 1.

By plugging the solution vector into the predefined fitness function, we obtain the fitness value for each prairie dog position. To keep track of the results, we use the array defined by Eq. (5).

$$f(PD) = \begin{bmatrix} f_{1}\left(\left[PD_{1,1}\; PD_{1,2}\; \cdots\; PD_{1,d-1}\; PD_{1,d}\right]\right) \\ f_{2}\left(\left[PD_{2,1}\; PD_{2,2}\; \cdots\; PD_{2,d-1}\; PD_{2,d}\right]\right) \\ \vdots \\ f_{n}\left(\left[PD_{n,1}\; PD_{n,2}\; \cdots\; PD_{n,d-1}\; PD_{n,d}\right]\right) \end{bmatrix}$$

(5)

An individual prairie dog's fitness function value is a measure of the quality of food available at a given location, the likelihood of successfully excavating and populating new burrows, and the efficacy of its anti-predator alarm system. The fitness function values array is sorted, and the element with the lowest value is designated the optimal solution to the minimization issue. In addition to the following three, the greatest value is taken into account while designing burrows that help animals hide from predators.

The PDO uses four iteration-based conditions to determine when to switch between exploration and exploitation. The total number of iterations is split in half, with the first half devoted to exploration and the second half to exploitation. The two exploration strategies are conditioned on \(iter < \frac{max_{iter}}{4}\) and \(\frac{max_{iter}}{4} \le iter < \frac{max_{iter}}{2}\), while the two exploitation strategies are conditioned on \(\frac{max_{iter}}{2} \le iter < 3\frac{max_{iter}}{4}\) and \(3\frac{max_{iter}}{4} \le iter \le max_{iter}\).

Equation (6) describes how the algorithm updates prairie dog positions during the foraging stage of exploration. The second exploration strategy evaluates the digging strength and the quality of the food sources found so far; the digging strength used to create new burrows is calibrated to decrease over time, which helps control the burrowing activity. Position updates during burrow construction are described by Eq. (7).

$$PD_{i+1,j+1} = GBest_{i,j} - eCBest_{i,j} \times \rho - CPD_{i,j} \times Levy(n) \quad \forall\; iter < \frac{max_{iter}}{4}$$

(6)

$$PD_{i+1,j+1} = GBest_{i,j} \times rPD \times DS \times Levy(n) \quad \forall\; \frac{max_{iter}}{4} \le iter < \frac{max_{iter}}{2}$$

(7)

Here, \(GBest_{i,j}\) is the best global solution obtained so far, and \(eCBest_{i,j}\), defined in Eq. (8), assesses the effect of the current best solution. \(\rho\) is the frequency of the specialized food-source alarm, set to 0.1 kHz in this experiment; \(rPD\) is the position of a random solution; and \(CPD_{i,j}\), given by Eq. (9), is the random cumulative effect of all prairie dogs in the colony. The digging strength of the coterie, denoted \(DS\), varies with the quality of the food source and is determined randomly by Eq. (10). The \(Levy(n)\) distribution is known to promote a more effective and thorough exploration of the search space.

$$eCBest_{i,j} = GBest_{i,j} \times \Delta + \frac{PD_{i,j} \times mean\left(PD_{n,m}\right)}{GBest_{i,j} \times \left(UB_{j} - LB_{j}\right) + \Delta}$$

(8)

$$CPD_{i,j} = \frac{GBest_{i,j} - rPD_{i,j}}{GBest_{i,j} + \Delta}$$

(9)

$$DS = 1.5 \times r \times \left(1 - \frac{iter}{max_{iter}}\right)^{\left(2\frac{iter}{max_{iter}}\right)}$$

(10)

where r adds a stochastic property that guarantees exploration, taking the value −1 or +1 depending on whether the current iteration is odd or even. Although the prairie dogs are considered identical in the PDO implementation, the small number \(\Delta\) accounts for the small differences among them.

The purpose of PDO's exploitation mechanisms is to search intensively within the promising regions discovered during the exploration phase. Equations (11) and (12) model the two strategies used during this stage. As discussed earlier, the PDO switches between these two tactics for \(\frac{max_{iter}}{2} \le iter < 3\frac{max_{iter}}{4}\) and \(3\frac{max_{iter}}{4} \le iter \le max_{iter}\), respectively.

$$PD_{i+1,j+1} = GBest_{i,j} - eCBest_{i,j} \times \varepsilon - CPD_{i,j} \times rand \quad \forall\; \frac{max_{iter}}{2} \le iter < 3\frac{max_{iter}}{4}$$

(11)

$$PD_{i+1,j+1} = GBest_{i,j} \times PE \times rand \quad \forall\; 3\frac{max_{iter}}{4} \le iter \le max_{iter}$$

(12)

As in Eq. (8), \(GBest_{i,j}\) is the best global solution obtained so far and \(eCBest_{i,j}\) assesses the effect of the current best solution. Equation (9) defines \(CPD_{i,j}\) as the cumulative effect of all prairie dogs in the colony, and \(\Delta\) is a small number accounting for the quality of the food source. In Eq. (13), \(PE\) stands for the predator effect, and \(rand\) is a random number between zero and one.

$$PE = 1.5 \times \left(1 - \frac{iter}{max_{iter}}\right)^{\left(2\frac{iter}{max_{iter}}\right)}$$

(13)

where \(iter\) is the current iteration and \(max_{iter}\) is the maximum number of iterations.
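
Pulling Eqs. (6)-(13) together, the sketch below shows one way the PDO update loop could look in Python for a generic minimisation problem. It is a simplified illustration rather than the authors' implementation: the Lévy-flight generator, the greedy acceptance step, and the values used for ρ and the small constants Δ/ε are assumptions.

```python
import math
import numpy as np

def levy(dim, beta=1.5, rng=None):
    # Levy-flight step (Mantegna's algorithm), used here for the Levy(n) term.
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def pdo_minimise(fitness, dim, lb, ub, n_dogs=30, max_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    rho, delta = 0.1, 1e-10                                  # alarm frequency and small constant (assumed)
    PD = rng.uniform(lb, ub, (n_dogs, dim))                  # random initialisation, Eq. (4)
    fit = np.apply_along_axis(fitness, 1, PD)                # fitness of every prairie dog, Eq. (5)
    gbest = PD[fit.argmin()].copy()

    for it in range(max_iter):
        r = 1 if it % 2 == 0 else -1
        DS = 1.5 * r * (1 - it / max_iter) ** (2 * it / max_iter)   # digging strength, Eq. (10)
        PE = 1.5 * (1 - it / max_iter) ** (2 * it / max_iter)       # predator effect, Eq. (13)
        for i in range(n_dogs):
            rPD = PD[rng.integers(n_dogs)]                          # a randomly chosen solution
            eCBest = gbest * delta + PD[i] * PD.mean(axis=0) / (gbest * (ub - lb) + delta)  # Eq. (8)
            CPD = (gbest - rPD) / (gbest + delta)                                           # Eq. (9)
            if it < max_iter / 4:                      # exploration: foraging, Eq. (6)
                new = gbest - eCBest * rho - CPD * levy(dim, rng=rng)
            elif it < max_iter / 2:                    # exploration: burrow building, Eq. (7)
                new = gbest * rPD * DS * levy(dim, rng=rng)
            elif it < 3 * max_iter / 4:                # exploitation: food quality, Eq. (11)
                new = gbest - eCBest * delta - CPD * rng.random()
            else:                                      # exploitation: predator alarm, Eq. (12)
                new = gbest * PE * rng.random()
            new = np.clip(new, lb, ub)
            if fitness(new) < fit[i]:                  # greedy acceptance (assumed)
                PD[i], fit[i] = new, fitness(new)
        if fit.min() < fitness(gbest):
            gbest = PD[fit.argmin()].copy()
    return gbest, fitness(gbest)

# Example: tune two hyper-parameters (e.g. learning rate and momentum) by minimising a
# placeholder objective; in practice the objective would be IDSNet's validation loss.
best, best_val = pdo_minimise(lambda x: float(np.sum((x - 0.05) ** 2)), dim=2,
                              lb=np.array([1e-4, 0.0]), ub=np.array([0.1, 0.99]))
print(best, best_val)
```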

Continued here:
Prediction of DDoS attacks in agriculture 4.0 with the help of prairie ... - Nature.com

Read More..

Value investing: Out of favor or out of time? – Chattanooga Times Free Press

September 16, 2023 at 12:00 p.m.

by Christopher A. Hopkins

Two generations of investors came of age under a rubric known as value investing. Put simply, the concept involves identifying securities that are currently selling for less than their fair or intrinsic value due to some misperception by market participants. The approach has a certain inherent logical appeal with the added benefit of having worked for most of the time between the 1970s and 2007.

Value investing may be broadly contrasted with an alternative perspective that seeks to identify companies that are expensive now but can expand rapidly to eventually justify a higher price. This perspective, generally called growth investing, has vastly outperformed value over the past 16 years with only brief exceptions. This prolonged reversal begs the question: Is value investing an anachronism, or should we perhaps reconsider how we apply it?

Although the term wasn't yet in use, the concept of value investing traces to the pioneering academic work of Benjamin Graham and David Dodd at Columbia University in the aftermath of the Great Depression.

Visit link:
Value investing: Out of favor or out of time? - Chattanooga Times Free Press

Read More..