
The Cloud has a serious and fragile vulnerability: Access Tokens – Security Boulevard

The October 2023 Okta support system attack, which so far has publicly involved Cloudflare, 1Password and BeyondTrust, informs us just how fragile and vulnerable our cloud applications are, because they are built using access tokens to authenticate counterparties.

If a valid access token is stolen by a threat actor, most systems don't have the internal defenses built in to detect that a valid token is now being used by a threat actor, leaving them completely vulnerable. To be clear, reliance on access tokens to authenticate counterparties is NOT unique to Okta and could happen to any of a myriad of cloud services providing application functionality such as email, CRM, and accounting/finance, to name just a few examples.

What are access tokens and why do they make our cloud infrastructure and applications so vulnerable? Simple: they are what is known as bearer tokens, which means that whoever possesses the token has the access rights granted to its original holder. By design, the recipient is only required to check whether the token is still valid and, if so, grant access.
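
To make the bearer property concrete, here is a minimal sketch (TypeScript on Node 18+, not from the original post) of how such a token is typically presented; the API URL and token value are hypothetical placeholders:

```typescript
// Minimal sketch: whoever holds `token` gets the access it grants -- the
// server checks the token, not the sender. URL and token are placeholders.
const token = process.env.ACCESS_TOKEN ?? "example-bearer-token";

async function callProtectedApi(): Promise<void> {
  const response = await fetch("https://api.example.com/v1/records", {
    headers: { Authorization: `Bearer ${token}` },
  });
  console.log(response.status); // 200 if the token is accepted
}

callProtectedApi().catch(console.error);
```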

Because of their power, access tokens must be protected from theft at all costs. There are four key aspects to protecting tokens:

1. Never persistently store access tokens (or the credentials used to obtain them).
2. Only send tokens over secure and, ideally, mutually authenticated channels.
3. Keep token lifetimes as short as practical.
4. Restrict which machines can connect to the servers that accept tokens (defense-in-depth).

Deviate from these simple rules and applications (cloud, hybrid and classical) that rely on access tokens are extremely vulnerable.

Looping back to the recent Okta breach, it boils down to stolen access tokens, with three of the four rules breached. The only rule that wasn't fully breached was number 2, though by public reporting the connection does not appear to have been mutually authenticated.

What is an access token?

Think of an access token as being similar to a ticket used to get into a sporting event, with some notable differences. Presenting a valid ticket to a stadium gate agent allows you to enter the venue (authentication). Once inside the stadium gate, you go to the section, row and seat for that ticket (scope/authorization).

Access tokens, technically called OAuth tokens, work like tickets with two notable differences: reuse and copy protection.

How are access tokens generated and used?

Without getting down into the weeds of how modern Identity Providers (IDPs) and protocols work, the answer is fairly simple: an Agent (human or system) presents credentials to an IDP and requests access tokens to a resource (e.g. a system, application, database, etc.). The credentials presented to the IDP can be a username/password augmented with multi-factor authentication, a certificate or other types of secrets. If successfully authenticated, the IDP sends the Agent an access token. To protect access tokens, they should only be sent over properly encrypted channels, with the validity period set as short as possible, though the downside of short durations is that the Agent has to re-authenticate to the IDP more often.
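
As an illustration of that flow, here is a hedged sketch of an OAuth 2.0 client-credentials token request (RFC 6749) in TypeScript; the IDP endpoint, client ID, secret and scope are hypothetical placeholders, not details from the Okta incident:

```typescript
// Sketch of an OAuth 2.0 client-credentials token request (RFC 6749).
// The IDP URL, client ID, client secret and scope are placeholders.
interface TokenResponse {
  access_token: string;
  token_type: string;
  expires_in: number; // lifetime in seconds -- keep this short
}

async function requestToken(): Promise<TokenResponse> {
  const response = await fetch("https://idp.example.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: process.env.CLIENT_ID ?? "",
      client_secret: process.env.CLIENT_SECRET ?? "",
      scope: "records:read",
    }),
  });
  if (!response.ok) throw new Error(`Token request failed: ${response.status}`);
  return (await response.json()) as TokenResponse;
}
```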

Once an Agent has their access tokens, they present them to resources to gain access. The resource validates the tokens with the IDP to see if they have been revoked or expired and if not, grants access based on the authorization rights associated with the Agent.
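
For the validation step, many IDPs expose a token introspection endpoint (RFC 7662). Here is a rough sketch of how a resource might check a presented token; the introspection endpoint and resource credentials are placeholders:

```typescript
// Sketch of OAuth 2.0 token introspection (RFC 7662): the resource asks the
// IDP whether a presented token is still active (not expired or revoked).
// The introspection endpoint and resource credentials are placeholders.
async function isTokenActive(token: string): Promise<boolean> {
  const basic = Buffer.from(
    `${process.env.RESOURCE_ID}:${process.env.RESOURCE_SECRET}`
  ).toString("base64");

  const response = await fetch("https://idp.example.com/oauth2/introspect", {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      Authorization: `Basic ${basic}`, // the resource authenticates itself
    },
    body: new URLSearchParams({ token }),
  });

  const result = (await response.json()) as { active: boolean };
  return result.active; // false once the token is revoked or expired
}
```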

How do you protect access tokens?

Given how powerful access tokens are, they must be protected at all costs.

Going back to the four methods of protection listed earlier, the first order of business is to never persistently store access tokens. Avoiding persistent storage greatly reduces the attack surface available to a threat actor trying to get a copy of a token. As a side note, the original credentials presented to the IDP to obtain the access tokens in the first place should not be persisted either. Instead, applications should make use of Key Management Services or Vaults.

The second task is to ensure that tokens are always sent over secure channels, which in the majority of cases means using Transport Layer Security (TLS). TLS versions older than 1.2 should be avoided, as these are no longer considered secure and have been deprecated.

TLS has two modes of operation: One-way (most common) and mutual.

With one-way TLS, the channel provides only data privacy and data integrity; it misses out on authentication because only the server providing the resource has a certificate. Many human-based use cases can be sufficiently secured using one-way TLS, since the IDP can ask a human for additional authentication factors as part of the initial login. For machine-to-machine connections, however, one-way TLS usually leaves open vulnerabilities, and mutual TLS should be used, as it serves as a critical second factor for authentication.

Ideally, mutual TLS (mTLS) connections should be used, whereby both sides of the connection mutually authenticate each other at the transport layer using certificates or even pre-shared keys, which act as an additional authentication factor. The use of mTLS ensures the Agent is authenticated with the server, validating that connections only come from valid and known sources as opposed to a threat actor's machine armed with a stolen access token.
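
As a sketch of what that looks like in practice, here is a mutually authenticated client connection in TypeScript using Node's https module; the certificate file paths, hostname and token are placeholders, and in production the key material would come from a KMS or vault rather than local files:

```typescript
// Sketch of a mutually authenticated (mTLS) client connection in Node.js.
// File paths, hostname and token are hypothetical placeholders.
import https from "node:https";
import { readFileSync } from "node:fs";

const agent = new https.Agent({
  cert: readFileSync("client-cert.pem"), // proves the client's identity
  key: readFileSync("client-key.pem"),
  ca: readFileSync("internal-ca.pem"),   // trust only the internal issuing CA
  minVersion: "TLSv1.2",                 // never below TLS 1.2
});

const req = https.request(
  {
    hostname: "api.example.com",
    path: "/v1/records",
    method: "GET",
    agent,
    headers: { Authorization: `Bearer ${process.env.ACCESS_TOKEN}` },
  },
  (res) => console.log("status:", res.statusCode)
);
req.end();
```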

The third task is to ensure access tokens have as short a lifetime as possible such that if they are stolen, their useful lifetime is limited. For human Agents, this translates to having to re-authenticate with the IDP, so many of these use cases tend to have lifetimes set in hours or days. For machine based Agents, the lifetime should be set as short as possible without putting unnecessary burden on the IDP. Often this translates to 15 to 60 minutes.
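
A minimal sketch of how a machine Agent might honour short lifetimes, building on the hypothetical requestToken() helper shown earlier; the token lives only in memory and is refreshed shortly before it expires:

```typescript
// Sketch: keep a short-lived token only in memory and refresh it just before
// it expires. Builds on the hypothetical requestToken() helper above.
let cached: { token: string; expiresAt: number } | null = null;

async function getToken(): Promise<string> {
  const now = Date.now();
  // Refresh 60 seconds before expiry so requests never carry a stale token.
  if (!cached || now > cached.expiresAt - 60_000) {
    const fresh = await requestToken(); // e.g. expires_in of 900-3600 seconds
    cached = {
      token: fresh.access_token,
      expiresAt: now + fresh.expires_in * 1000,
    };
  }
  return cached.token; // never written to disk or any persistent store
}
```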

The fourth task is to restrict which machines can connect to servers (defense-in-depth), since this presents a very significant hurdle to overcome should an access token be stolen. A server that requires all connections to be mTLS-based protects itself with a critical second factor, ensuring that only authenticated connections can be used to send access tokens. This second factor significantly reduces the attack surface in which access tokens can be abused. Combining mTLS with cloud privacy or network security groups, or with layer 3 segmentation solutions, goes even further in reducing the attack surface as part of defense-in-depth.
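
On the server side, the same idea can be sketched with Node's https module: the server refuses any connection that is not mutually authenticated, so a stolen token alone is not enough to reach it. Certificate paths below are placeholders:

```typescript
// Sketch of a server that only accepts mutually authenticated connections,
// so a stolen bearer token alone cannot be replayed from an unknown machine.
// Certificate paths are hypothetical placeholders.
import https from "node:https";
import { readFileSync } from "node:fs";

const server = https.createServer(
  {
    key: readFileSync("server-key.pem"),
    cert: readFileSync("server-cert.pem"),
    ca: readFileSync("internal-ca.pem"), // only clients signed by this CA
    requestCert: true,                   // ask every client for a certificate
    rejectUnauthorized: true,            // drop connections without a valid one
    minVersion: "TLSv1.2",
  },
  (req, res) => {
    // Only mutually authenticated clients get this far; the bearer token in
    // req.headers.authorization is then validated with the IDP as usual.
    res.writeHead(200);
    res.end("ok");
  }
);

server.listen(8443);
```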

Conclusion:

Applications and infrastructure must adhere to solid code and implementation practices, such as the ones described in this blog, when access tokens are used at the center of the authentication strategy. Best practices of short expirations, never persisting tokens and using mTLS for all connections, combined with network capabilities such as network or privacy groups or segmentation, are all required to ensure sufficient defense-in-depth protective controls are in place.

The post The Cloud has a serious and fragile vulnerability: Access Tokens appeared first on TrustFour: TLS Compliance Monitoring.

*** This is a Security Bloggers Network syndicated blog from TrustFour: TLS Compliance Monitoring authored by Robert Levine. Read the original post at: https://trustfour.com/the-cloud-has-a-serious-and-fragile-vulnerability-access-tokens/

Originally posted here:
The Cloud has a serious and fragile vulnerability: Access Tokens - Security Boulevard


Touring the Intel AI Playground – Inside the Intel Developer Cloud – ServeTheHome

We had the opportunity to do something that is just, well, cool. In August, just as STH was preparing to move from Austin to Scottsdale, I had the opportunity to head up to Oregon and tour something I had been asking about for at least a year. My questions to Intel were: What is the Intel Developer Cloud? Where does it run out of? Is it only a few systems set up that you get short-term SSH access to? All those and more were answered when I visited the Intel Developer Cloud.

As a quick note, we are going to say this is sponsored since we had to fly up to Oregon to do this piece, and also, this is not common access. It took well over a year to go from the idea to getting the approvals to doing the tour. As with everything on STH, our team produces this content independently, but we just wanted to call this out.

As one would imagine, we also have a video for this article since it was so cool to see.

For those who have not seen this yet, the Intel Developer Cloud is the company's place to try out systems in a cloud environment with various technologies, including Intel Xeon, Xeon Max, GPU Flex Series, GPU Max Series, and (formerly Habana) Gaudi. Something that I did not know before the tour was that Intel has service tiers ranging from a more limited free test drive tier to paid plans for developers and teams. There is another option for enterprises that need larger scale deployments as a more customized program.

One can create an account, and through Intel's various developer account types and presumably some sales logic, different platforms become available to you to try.

At that point, SSH credentials are deployed alongside instances running on hardware, and access is granted to develop on platforms. Some plans have early access to hardware, support, and additional toolkits. Here is an example of starting an instance with a 4th Gen Intel Xeon Scalable (Sapphire Rapids) Platinum 8480+ and four PCIe GPUs (Max 1000):

That was only part of the equation, however. What I wanted to know, and pushed Intel to let me get access to, was the hardware that this is actually running on. After many approvals, we got access to another part of a data center suite in Oregon that I had been to previously. We are only allowed to show a smaller fraction of this suite (one of many suites in the DC), but I grabbed this photo for some scale of the floor size. Check out the lights overhead as they extend well beyond the cage we are in, and it is a fence behind me, not even the corner of the floor.

What I can show is a few photos from my December 2022 visit to the facility from perhaps the opposite corner from the above photo. There, Intel has seemingly countless systems running large scale testing for things like reliability at scale for cloud providers.

At the time, it was exciting to see how many of these systems were running Sapphire Rapids that would not be launched for about a month after this visit.

Intel also has areas here that go far beyond what looks like standard servers. There are things like these development systems where Intel can cause voltage drops on different parts of the platform and more to test what would happen if a component failed, for example.

Hello to the Lantronix Spider we reviewed in the fun photo above.

For some sense of scale, this was just one aisle in the facility with systems set up for this type of testing.

The bottom line is that Intel has its own farm of development systems just down from its Jones Farm campus and Oregon fabs. Whether it is for the Intel Developer Cloud today, testing at scale for reliability for cloud customers, or doing platform development work, there is a lot here. Knowing that we are going to show off just a tiny portion of this suite in this single facility should give some sense of the Intel development scale.

With that, let us get to our tour.

Read more:
Touring the Intel AI Playground - Inside the Intel Developer Cloud - ServeTheHome


SPAs and React: You Don’t Always Need Server-Side Rendering – The New Stack

As you may have noticed, the Start a New React Project section of the React docs no longer recommends using CRA (Create React App). Create React App used to be the go-to approach for building React applications that only required client-side routing and page rendering. Now, however, the React docs suggest picking one of the popular React-powered frameworks that support server-side rendering (SSR).

I've built applications with everything you'll see on that list of production-grade React frameworks, but I also spent many years building SPAs (Single Page Applications) that only needed client-side functionality and everything was fine.

Whilst there are many applications that do need server-side rendering, there are also many applications that don't. By opting to choose an SSR React framework, you might be creating problems rather than solving them.

As the acronym suggests, an SPA only has a single page. An SPA might have navigation, but when you click from page to page what you're experiencing are routes, not pages. When you navigate to a new route, React takes over and hydrates the page with HTML and (usually) data that has been sourced using a client-side HTTP request.
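
As a rough illustration of that pattern, here is a minimal React (TypeScript) sketch: the route renders immediately, then fetches its data in the browser and shows a spinner until it arrives. The /api/products endpoint is a hypothetical placeholder:

```tsx
import { useEffect, useState } from "react";

// Minimal SPA route: it renders immediately, then fills itself in with data
// fetched from the browser. "/api/products" is a hypothetical endpoint.
export function ProductsRoute() {
  const [products, setProducts] = useState<string[] | null>(null);

  useEffect(() => {
    fetch("/api/products")
      .then((res) => res.json())
      .then((data: string[]) => setProducts(data))
      .catch(console.error);
  }, []);

  if (products === null) return <p>Loading…</p>; // the dreaded spinner

  return (
    <ul>
      {products.map((name) => (
        <li key={name}>{name}</li>
      ))}
    </ul>
  );
}
```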

SSR applications are different. Server-side rendered applications actually do have pages. The data is sourced on the server, where the page is compiled, and then the final output is sent to the browser as a complete HTML webpage.

As noted, with SSR you need a server and usually this will involve a cloud provider. If your SSR framework only really works with one cloud provider, you might end up experiencing vendor lock-in. Thankfully frameworks like Remix and Astro are server agnostic, so you can either bring your own server, or use an adapter to enable SSR in your cloud provider of choice.

One problem that crops up again and again is spinner-geddon: each time you navigate to a new page, you're presented with a spinner animation to indicate that data is being requested, and only after a successful HTTP request completes will the page become hydrated with content.

An SPA isn't great for SEO (Search Engine Optimization) either, because as far as Google is concerned, the page is blank. When Google crawls a webpage, it doesn't wait for HTTP requests to complete, it just looks at the content/HTML in the page, and if there's no HTML then how can Google rank the page?

Because of this (and a number of other reasons), there has been a shift in React application development towards server-side rendering. But whilst both of the above sound like considerable problems, are they, really?

The classic developer response will likely be: It depends. And it really does! I'll now tell you a short story about an SPA I built a few years ago, so you can judge for yourselves.

Rewind the clock to 2018: I'd been hired by a tech consultancy company that had been brought in to perform a digital transformation for a large financial institution based in London.

My first project was to build a browser-based solution that would replace an antiquated piece of licensed software that was no longer fulfilling its duties, not to mention costing the company money. The application was for internal use only and would only ever have three users: Margaret, Celia and Evelyn, a delightful team of people who were nearing retirement age, but who played an important role in the firm.

The application I built took approximately eight weeks to complete, only used client-side HTTP requests to fetch data from an API, had authentication, was deployed using an existing Azure DevOps pipeline, and wasn't Search Engine Optimized.

Margaret, Celia and Evelyn absolutely loved it, and they didn't mind the occasional spinner since the app solved a problem for them. It also solved a problem for the firm: no more expensive software licensing. I have it on good authority that it's still in use today. I also happen to know that Margaret, Celia and Evelyn have all since retired, in case you were wondering.

I think they are. There are many internal applications that will never see the outside world and won't need to use any of the features that come with the more modern React-powered SSR frameworks. But since the React docs are no longer recommending CRA, what else could you use if you were building an SPA today?

Vite can be used alongside React and steps in as a more modern replacement to Webpack (the module bundler used by CRA).

Vite is a build tool that aims to provide a faster and leaner development experience for modern web projects.

I thought about turning this into a tutorial, but there's really no point.

The Vite docs cover everything you'll need to know in the Scaffolding Your First Vite Project section; choosing from the CLI prompts, you'll have a React app up and running in about 20 seconds.

You'll also see from the image above that Vite isn't only a great choice for building React applications; it's also suitable for use with other frameworks.

In short, bundling.

When developing an application, code is split up into smaller modules. This makes features easier to develop and allows common code to be shared among different parts of the application. But, at some point, all those modules need to be bundled together to form one giant JavaScript file. This giant JavaScript file is required by the browser to run the application.

Bundling occurs whenever a file is saved (which happens hundreds of thousands of times during the course of development). With tools like Webpack, bundles have to be torn down and rebuilt to reflect the changes. Only after this bundling step is complete will the browser refresh, which in turn allows developers to actually see their changes.

As an application grows, and more and more JavaScript is added, the bundler has more and more work to do. Over time, this bundling step starts to take longer and can really affect developer productivity. Vite addresses this by leveraging native ES Modules and HMR (Hot Module Replacement).

With Vite, when a file is saved, only the module that changed is updated in the bundle. This results in a much faster bundling step and a much more productive and pleasant development experience.
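
Vite exposes this behaviour through its HMR API. Here is a minimal sketch of what module-level hot replacement looks like, using a hypothetical counter.ts module (the import.meta.hot types come from vite/client):

```typescript
// counter.ts -- a hypothetical module in a Vite project.
export let count = 0;
export const increment = () => ++count;

// Vite's HMR API: when this file is saved, only this module is swapped in;
// the rest of the app keeps running with no full-page reload or rebundle.
if (import.meta.hot) {
  import.meta.hot.accept((newModule) => {
    console.log("counter.ts hot-updated", newModule);
  });
}
```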

There are a number of other benefits to using Vite, which have been clearly explained in the docs: Why Vite.

So there you have it, out with the old and in with the new, but the legacy of the React SPA can live on!

Naturally, there are many cases where an SPA isn't the most suitable choice. However, when it comes to SPA or SSR, it's not either-or, it's both-and.

Follow this link:
SPAs and React: You Don't Always Need Server-Side Rendering - The New Stack


I went to paradise to see the future of AI, and I’m more confused than … – The Verge

I'm here to see the future of computing. But at the moment, I'm trying to coax a butterfly onto a nectar-dipped stick.

I feel like I'm bothering the insects, but the monarch butterfly caretakers accompanying us in a 10-foot screened-in box insist that it's okay, so I follow their instructions and keep gently prodding the feet of whichever butterfly is closest, willing it to hold on.

As we each gradually work on securing a butterfly, one of the butterfly experts asks our small group politely how our product launch is going. There's a brief, collective silence. None of us have the energy to explain that it's not our launch; we're just here to cover and analyze it. But rather than explain this deeply boring backstory, someone in our group mercifully pipes up, "It's going great."

After many failed attempts, I finally get one of the little guys to hang on. There's a rush of pride as I turn to the rest of the group and announce, "Look, I got one!" And then there's nothing to do except stand awkwardly, wondering what comes next.

Qualcomm CEO Cristiano Amon presented his keynote speech with all of his usual vigor.

Qualcomm's annual Snapdragon Summit is weird like that. Every year, the company invites a lot of industry partners, analysts, and members of the press to Hawaii to bear witness to its next flagship chip announcement. I'll tell you right now that industry partners, analysts, and members of the press are largely indoorsy people who are wholly unaccustomed to tropical climates.

By the end of day two, I'd sweated through every item of clothing I packed and started doing laundry in my hotel room sink. On the plus side, my room's patio is so hot that my clothes are bone-dry in a few hours. (Per The Verge's ethics policy, we don't accept paid trips. With the exception of a few prearranged group meals, Vox Media paid for my travel, lodging, food, and other expenses.)

Our butterfly encounter is part of a circuit of demo stations designed to show off the capabilities of the company's latest tech. The stations are all positioned outside in the midday tropical sun, and by the time we get to the butterfly area, we are looking generally unwell and quite damp. Qualcomm has done a conscientious thing of incorporating elements of traditional Hawaiian culture into each station alongside its technology demos. Some are loosely connected; we learn the history of slack-key guitar while we try out a new audio switching technology.

Our first stop included a demonstration of how poi, a traditional Hawaiian food staple, is made.

Others don't tie in as neatly, and an hour into the session, I'm not clear on what the monarch butterflies have to do with the next generation of mobile computing, but I'm too hot to care. After a while, our butterfly guides show us how to gently grasp a butterfly by holding its closed wings between two fingers, and we're instructed to take one out of the enclosure and release them en masse as we each make a wish. My mind flips rapidly through about a half-dozen, from thoughts of peace and healing for the people of Maui, where we are visitors, to "I'd like to get out of the sun as quickly as possible."

With our butterflies free, we step over to the tech demo station and see one of the features I've been waiting for: generative photo expansion. It's a feature supported by the Snapdragon 8 Gen 3, the mobile chipset Qualcomm has just announced. You pinch and zoom out of an image and watch as generative AI fills in the borders in a matter of seconds.

The concept is neat; the demo itself is a mixed bag. It handled some preloaded scenes quite well. But when challenged to fill in part of a picture of a face, things don't go so well. Later on, I would see similar results: sometimes it's incredibly impressive, but one time, it adds a disembodied sexy leg alongside a landscape. Other demos throughout the summit are a similar mix of impressive and not-quite-right. A couple of onstage demonstrations of on-device text generation go slightly sideways: what starts with a request to plan a trip from San Diego to Seattle shifts mid-demo to a trip from Maui to Seattle. Impressive, until it isn't.

And that kind of sums up my feelings about the vision of a generative AI future I was shown over the week. The most optimistic scenario is the picture Qualcomm executives painted for me through its keynotes and a series of interviews: that on-device AI is truly the next turn in mobile computing. Our phones won't be the annoying little boxes of apps that they've turned into. AI will act as a more natural, accessible interface and a tool for all the things we want our devices to do. We'll regain the mental and emotional overhead we spend every day on tapping little boxes and trying to remember what we were doing in the first place as we get lost in a sea of unscheduled scrolling.

Impressive, until it isn't

AI could also be a real dumpster fire. There's all the potential for misuse that could undo the very fabric of our society. Deepfakes, misinformation, you know, the real bad stuff. But the AI we're probably going to encounter the most just seems annoying. One of the demos we're shown features a man talking to a customer service AI chatbot about his wireless plan upgrade options, which is a totally pleasant exchange that also sounds like a living nightmare. You better believe that AI chatbots are about to start showing up in a lot of places where we're accustomed to talking to a real person, while the barriers to letting you just TALK TO A REPRESENTATIVE grow ever higher.

To someone who isn't constantly immersed in the whirling hot tub that is the consumer tech news cycle, this latest Coming of AI might sound thoroughly unimpressive. Hasn't AI been around for a while now? What about the AI in our phone cameras, our assistants, and ChatGPT? The thing to know, and the thing that Qualcomm takes great pains to emphasize over the course of a week, is that when the AI models run on your device and not in the cloud, it's different.

If you had to wait 15 or 20 seconds for confirmation every time you asked Google to set a timer, you'd never use it again

The two keywords in this round of AI updates are generative and on-device. Your phone has already been using software trained on machine learning to decide which part of your photo is the sky and how blue it should be. This version of AI runs the machine learning models right on your phone and uses them to make something new: a stormy sky instead of a blue one.

Likewise, ChatGPT introduced the world to generative AI, but it runs its massive models in the cloud. Running smaller, condensed models locally allows your device to process requests much faster, and speed is crucial. If you had to wait 15 or 20 seconds for confirmation every time you asked Google to set a timer, you'd never use it again. Cutting out the trip to the cloud means you can reasonably ask AI to do things that often involve several follow-up requests, like generating image options from text again and again. It's private, too, since nothing leaves your phone. Using a tool like Google's current implementation of Magic Editor requires that you upload your image to the cloud first.

Generative AI as a tool has well and truly arrived, but what I'm trying to understand on my trip to the tropics is what it looks like as a tool on your phone. Qualcomm's senior vice president of technology planning, Durga Malladi, provides the most compelling, optimistic pitch for AI on our phones. It can be more personal, for one thing. When I ask for suggested activities for a week in Maui, on-device AI can take into account my preferences and abilities and synthesize that information with data fetched from the cloud.

Beyond that, Malladi sees AI as a tool that can help us take back some of the time and energy we spend getting what we want out of our phones. "A lot of the time you have to think on its behalf, learn how to operate the device." With AI at your disposal, he says, it's the other way around. Big if true!

The advanced speech recognition possible with on-device language models means you can do a lot more by just talking to your phone, and voice is a very natural, accessible user interface. "What AI brings to the table now is a much more intuitive and simple way of communicating what you really need," says Malladi. It can open up mobile computing to those who have been more or less shut out of it in the past.

It's a lovely vision, and to be honest, it's one I'd like to buy into. I'd like to spend less time jumping from app to app when I need to get something done. I'd like to ask my phone questions more complex than "What's the weather today?" and feel confident in the answer I get. Outsourcing the boring stuff we do on our phones day in and day out to AI? That's the dream.

Honor CEO George Zhao presented a glimpse of what on-device AI will look like when it reaches, you know, actual devices.

But as I am reminded often on my trip, Qualcomm is a horizontal solutions provider, meaning they just make the stuff everyone else builds on top of. Whatever AI is going to look like on our phones is not ultimately up to this company, so later on in the week, I sat down with George Zhao, CEO of Honor, to get the phone-maker's perspective. In his view, on-device AI will and should work hand-in-hand with language models in the cloud. They each have technical limitations: models like ChatGPT's are massive and trained on a wide-ranging data set. Conversely, the smaller AI models that fit on your phone don't need to be an expert on all of humanity; they just need to be an expert on you.

Referencing an example he demonstrated onstage earlier in the day, Zhao said an on-device AI assistant with access to your camera roll can help sort through videos of your child and pick out the right ones for a highlight reel; you don't need to give a cloud server access to every video in your library. After that, the cloud steps in to compile the final video. He also reiterates the privacy advantage of on-device AI, and that its role in our lives won't be to run all over our personal data with wild abandon; it will be a tool at our disposal. "Personal AI should be your assistant to help you manage the future, the AI world," he says.

It's a lovely vision, and I think the reality of AI in the near future lies somewhere between dumpster fire and a new golden age of computing. Or maybe it will be both of those things in small portions, but the bulk of it will land somewhere in the middle. Some of it really will be revolutionary, some of it will be used for awful things. But mostly it'll be a lot of yelling at chatbots to refill your prescription or book a flight or asking your assistant to coordinate a night out with friends by liaising with their AI assistants.

It strikes me that the moments I appreciated the most on my trip to Maui werent in the tech demos or keynotes. They were in the human interactions, many of them unexpected, in the margins of my day. Talking about relentless storms on the Oregon coast with Joseph, my Uber driver. The jokes and in-the-trenches humor shared with my fellow technology journalists. The utter delight and surprise shared with other swimmers as a giant sea turtle cruised by just under the waves. (A real thing that happened!) The alohas and mahalos as I pay for my groceries and order my coffee.

Just a happy monarch butterfly chompin' on a treat.

Sandra, another Uber driver, has printed lists of recommended restaurants and activities in her car. One comes with a tip to "Tell them Sandy sent you," and there's a directive to check under the passenger seat for a notebook with more suggestions. I'd rather walk into a restaurant and say "Sandy sent me" than "My AI personal assistant sent me."

I don't think we're headed for a future where AI replaces all of our cherished human interactions, but I do think a future where we all have a highly personalized tool to curate and filter our experiences holds somewhat fewer of these chance encounters. Qualcomm can set the stage and paint rosy pictures of an inclusive AI future, but that's the job of a tech company organizing an annual pep rally in the tropics to talk about its latest chips. What happens next will likely be messy and at times ugly, and it will be defined by the companies that make the software that runs on those chips.

Qualcomm got the butterfly onto the stick. Now what?

Photography by Allison Johnson / The Verge

Read more here:
I went to paradise to see the future of AI, and I'm more confused than ... - The Verge


Reclaiming Control Through Repatriation for Cloud Optimization – Sify

The role of a capable partner in guiding an organisation through this intricate maze of optimisation cannot be overstated.

The corporate world has been split by cloud computing. While it has undoubtedly generated compelling value propositions for global organisations, it has also produced a number of concerns. In this post, we will look at how organisations may optimise their operations by implementing a cloud repatriation plan.

However, it's a heated debate in the corporate world. While the benefits are undeniable, so are the challenges. In this post, we'll explore the concept of cloud repatriation: the process of moving data and applications from the public cloud back to on-premises or private cloud environments.

We'll delve into the reasons why organisations are considering this strategy and how it can help them regain control and optimise their operations in the cloud era.

The corporate world finds itself at a crossroads when it comes to cloud computing. Undoubtedly, cloud technology has ushered in an era of unparalleled opportunities for global organisations. The promise of cloud-hosted digital environments, with their unmatched scalability, flexibility, and cost savings, has drawn forward-thinking businesses into its orbit. However, beneath the surface, a host of challenges have emerged, compelling many organisations to explore the concept of cloud repatriation as a means to optimise their operations.

So, what exactly is cloud repatriation, and why is it increasingly considered a worthwhile endeavour? In essence, repatriation is the process of moving data and applications from the public cloud back to an organisation's on-premises data centre, private cloud, or a trusted hosting service provider. It's a strategic decision that isn't taken lightly and is not a one-size-fits-all solution. The ultimate goal is to discover and implement the most optimised architecture that seamlessly aligns with a company's unique business demands and objectives.

Managing cloud costs can be a daunting challenge if not executed efficiently. The Flexera 2023 State of the Cloud Report reveals that a staggering 82% of businesses identify managing cloud costs as their foremost obstacle. This challenge encompasses a web of factors, including data transfer costs, storage expenses, underutilised resources resulting from infrastructure sprawl, and the complexities of maintaining regulatory compliance.

Cloud security is a prominent concern for businesses, with 79% expressing reservations. Repatriating data or applications to on-premises infrastructure offers companies a greater degree of control over their security posture. This control extends to critical aspects like physical security measures, encryption techniques, network configurations, and access restrictions.

Navigating the cloud landscape can be as challenging as finding your way through a new city without a map or local guide. It's no wonder that 78% of businesses admit to struggling with a lack of resources and cloud-related skills.

Vendor lock-in adds another layer of complexity, as businesses become overly reliant on a single cloud provider for their infrastructure, services, or applications. Migrating data and applications becomes challenging, leading businesses to opt for repatriation as a strategy to avoid vendor lock-in.

In today's business environment, data security, compliance with location-specific data laws, and risk mitigation are paramount. Distant cloud environments can compromise data sovereignty and may not adhere to local data protection regulations. Businesses can lose control over how their data is processed and stored in various jurisdictions.

Increasingly, repatriated workloads are finding their rightful place in near-edge or on-premises edge locations. These locations offer benefits such as reduced latency, support for Internet of Things (IoT) use cases, and on-site data processing capabilities for real-time applications.

In light of these challenges, cloud repatriation emerges as a strategic choice for organisations seeking to regain control and optimise their cloud presence. It involves the movement of files and applications from the public cloud to a private cloud, hosting service provider, or an organisations on-site data centre.

As businesses undertake this journey to maximise their cloud presence, they inevitably find themselves evaluating the architecture of their existing security solutions and rethinking their network infrastructure. Thus, having a capable partner to navigate this complex terrain becomes invaluable.

The primary goal of cloud optimization is to utilise cloud computing resources in the most cost-effective and efficient manner possible. Repatriation can take on various forms, including hosted private clouds, multi-tenanted private clouds, and alternative deployment methods.

Recent research conducted by IDC reveals that clients are increasingly drawn to private cloud environments for both existing workloads and new projects born in the cloud, as opposed to public cloud settings. In response to this trend, system providers are now offering unified management platforms with administration, provisioning, and observability features. These platforms provide companies with access to specialised infrastructure that mirrors the user experience offered by public clouds. Projections from this research suggest that by 2024, the percentage of mission-critical applications operating in traditional dedicated data centres will decrease from 30% to 28%, while the percentage of updated versions of similar applications operating in private clouds will rise to 26%.

Modern businesses have the capacity to migrate some operations to the cloud without compromising data security while maintaining others on-site. This strategic flexibility allows them to harness the benefits of both environments.

To determine the optimal cloud optimization strategy, businesses must carefully evaluate their specific needs. However, it's important to note that repatriation is a complex process. As organisations navigate this journey to maximise their cloud presence, they will inevitably need to assess the architecture of their existing security solutions and reconfigure their network infrastructure. Therefore, the role of a capable partner in guiding an organisation through this intricate maze cannot be overstated.

See the article here:
Reclaiming Control Through Repatriation for Cloud Optimization - Sify


AI in the Biden Administration's Crosshairs: Summarizing the … – JD Supra

On October 30, 2023, President Biden announced a sweeping new Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO). The EO signals an all-hands-on-deck approach, with roles for agencies across the federal government, proposed requirements and/or guidance that will apply both to companies that offer artificial intelligence (AI)-related services and those that consume such services, and still-unfolding implications for the legal operation of such businesses.

Highlights of the EO for providers and consumers of AI products and services follow, with our 10 top takeaways for private sector investors and companies immediately after:

Highlights

Ten Top Takeaways for AI Builders, AI Investors, and AI Users

In sum: keep watching this space. Affected companies should carefully monitor the implementation of this executive order and any follow-on actions by agencies under the EO.

[1] More specifically defined as an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters [. . .]

[2] In January 2023, NIST released an Artificial Intelligence Risk Management Framework intended to provide a resource to organizations designing, developing, deploying, or using AI systems to manage risks and promote trustworthy and responsible development and use of AI systems. See our previous alert for more details.

[3] As one example, in an effort to slow China's development of advanced AI technologies, the DoC recently issued an array of semiconductor and supercomputer-related export controls. See recent client alerts here and here for a discussion of these export controls. As another, see our recent client alert on proposed outbound investment rules to restrict U.S. support for AI innovation in China here.

[4] Id.

Read more from the original source:
AI in the Biden Administration's Crosshairs: Summarizing the ... - JD Supra


How the UK crime agency repurposed Amazon cloud platform to … – ComputerWeekly.com

The UK's National Crime Agency (NCA) repurposed its cloud-based data analytics platform to help identify threats to life in messages sent by suspected criminals over the encrypted EncroChat phone network.

After placing a software implant on an EncroChat server in Roubaix, investigators from France's digital crime unit infiltrated the encrypted phone network in April 2020, capturing 70 million messages.

The operation, supported by Europol, led to arrests in the Netherlands, Germany, Sweden, France and other countries of criminals involved in drug trafficking, money laundering and firearms offences. More than 1,100 people have been convicted under the NCA's investigation into the French EncroChat data, Operation Venetic, which has led to more than 3,000 arrests across the UK, and more than 2,000 suspects being charged.

UK police have seized nearly six and a half tonnes of cocaine, more than three tonnes of heroin and almost 14 and a half tonnes of cannabis, along with 173 firearms, 3,500 rounds of ammunition and £80m in cash from organised crime groups.

Europol supplied British investigators with overnight downloads of data gathered from phones identified as being in the UK, through Europol's Large File Exchange, part of its Siena secure computer network.

With an estimated 9,000 UK-based EncroChat users, the NCA needed to quickly process a large volume of potentially incriminating data, so tasked its National Cyber Crime Unit (NCCU) with categorising it for human investigators to analyse. To automate the preprocessing of data once it had received the EncroChat material, NCCU staff added pre-built capabilities from Amazon Web Services (AWS) to its cloud data platform, including machine learning software with the capability to extract text, handwriting and data from EncroChat text messages and photographs.
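
The case study does not name the specific service, but extracting printed text and handwriting from images is the kind of job typically handled by something like Amazon Textract. A hedged sketch using the AWS SDK for JavaScript v3, with placeholder region, bucket and object names:

```typescript
import {
  TextractClient,
  DetectDocumentTextCommand,
} from "@aws-sdk/client-textract";

// Hypothetical sketch only: the case study does not say which service was
// used. Amazon Textract extracts printed text and handwriting from images.
// Region, bucket and object key below are placeholders.
const client = new TextractClient({ region: "eu-west-2" });

async function extractText(bucket: string, key: string): Promise<string[]> {
  const result = await client.send(
    new DetectDocumentTextCommand({
      Document: { S3Object: { Bucket: bucket, Name: key } },
    })
  );
  // Keep the detected lines of text for downstream triage and analysis.
  return (result.Blocks ?? [])
    .filter((block) => block.BlockType === "LINE")
    .map((block) => block.Text ?? "");
}

extractText("example-image-bucket", "photo-0001.jpg").then(console.log);
```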

"For us, it's about preventing harm and protecting the public," said an NCCU spokesperson, quoted in a technology company case study. "We had a flood of unstructured data and had to operate swiftly to reduce harm to the public. Our data scientists could probably have devised ways of analysing this data themselves. But when we have more than 200 threats to life, we can't afford to spend time doing that. Using off-the-shelf services from AWS enabled us to go from a standing start to a full capability in the space of hours. If we were to build it ourselves from scratch, that might have taken over a month of effort."

The NCCU was able to scale-up its existing data analysis platform from tens of users in the NCA to 300 within two weeks of being informed of the EncroChat investigation.

Once the historic messages extracted from EncroChat's in-phone database, called Realm, and live text messages sent from thousands of phones were processed, the NCA sent intelligence packages in the form of CSV files to Regional Organised Crime Units; the Police Service of Northern Ireland; Police Scotland; the Metropolitan Police; Border Force; the Prison Service; and HM Revenue & Customs.

These organisations were then responsible for analysing the data for further indications of threats to life, the drugs trade and other criminal activity.

The NCCU had been developing a cloud-based platform to analyse data for over three years before the EncroChat operation. Digital transformation consultancy Contino won the contract to build the platform on AWS.

By shifting from its on-premise infrastructure to the cloud, the NCCU said it has been able to spend more time on investigations, and less time on procuring and maintaining hardware and managing IT infrastructure.

"Previously, we had on-premises infrastructure, which required a lot of management and prevented us from doing the data science we wanted to do," said an NCCU spokesperson. "Our small tech team spent a considerable amount of time building and managing infrastructure."

"This was a problem, because our recruitment and retention are based on providing people with engaging and challenging work fighting cyber crime, not administering IT."

Within a year of beginning its pilot of the analytics platform, which used services including Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS), the NCCU introduced more advanced data processing capabilities.

This included the Amazon EMR big data platform, which helps scale and automate data processing, and AWS Glue, a serverless data integration service that can combine and organise data from a wide range of sources.

As a law enforcement agency that handles sensitive and therefore potentially harmful data, the NCA and NCCU also needed the platform to be secure, so used Amazon GuardDuty to monitor network activity to shield it from malicious activity.

"Moving data outside of our perimeter is not a decision we take lightly," said an NCCU spokesperson. "The transparency of AWS, its shared security model, and the access we had to documentation and experts assisted us on that journey considerably."

At the start of May 2021, the Netherlands Forensic Institute (NFI) announced that its forensic big data analysis (FBDA) team had similarly modified a computer model it had previously developed to scan for drug-related messages sent between suspected criminals in large volumes of communications data, as part of a research and development project.

The NFI told Computer Weekly at the time that the drug-talk software was developed in-house before being modified for threat-to-life detection and passed on to the police.

Using deep learning techniques, the FBDA team initially trained the models neural network in generic language comprehension by having it read webpages and newspaper articles, before introducing it to the messages of suspected criminals, so it could learn how they communicate.

"The team then began using similar techniques to develop a model to recognise life-threatening messages," said the NFI in a statement. "That model was ready when the chats from EncroChat poured into the police in Driebergen on 1 April."

Continue reading here:
How the UK crime agency repurposed Amazon cloud platform to ... - ComputerWeekly.com


What’s the Difference Between a Web Developer and a Software … – Dice Insights

Web developers and software engineers are popular roles within tech. Is there a lot of overlap between them? If not, what are the key differences? We're going to break down the differences between a web developer and a software engineer, and highlight what makes both roles unique.

In simplest terms, web developers build and maintain websites, web applications, and services. Depending on their interests and specialization, they may focus on the front-end (i.e., what web users see and do), the back-end (i.e., the servers and other components that actually keep websites and services running), or both. In the course of a typical day, this can mean engaging in tasks such as:

According to Lightcast (formerly Emsi Burning Glass), which collects and analyzes millions of job postings from across the country, companies frequently ask for the following skills in web developer job postings:

Those web developers who opt to focus on the front end will generally need to have a grasp of the following:

Meanwhile, those who want to concentrate their efforts on the back end will need to master skills including (but certainly not limited to):

Anyone who wants to become a master at full-stack web development will need to know how to use all of the above skills. That might seem like a lot, but keep in mind that many organizations will opt to hire a full-stack developer over someone who specializes exclusively in the front- or back-end; the difficulty of mastering the core concepts is commensurate with the opportunities out there.

Software engineers have a broad scope of responsibilities, and their daily tasks can vary wildly depending on their respective organizations' goals. In general, software engineering involves:

Lightcast's necessary skills for a software engineer, based on job postings, include:

But that can also depend on the organization's needs; for instance, a software engineer tasked with mobile development will absolutely need to know the programming languages involved in building apps and services for iOS and Android, including Objective-C, Swift, Java, and Kotlin.

Although the technical skills utilized by a software engineer might differ considerably from those needed by a web developer, the professions do share some commonalities. Specifically, both web developers and software engineers need to understand the principles of software design, and they need effective soft skills (such as communication and teamwork) in order to accomplish their goals and work with other stakeholders.

Both web developers and software engineers are in high demand, and those who want to jump from one role to the other will find a lot that's familiar, especially when it comes to using programming languages to build services and apps.

The core difference between web developers and software engineers is obviously focus: web developers work on websites and applications, whereas software engineers can focus on anything from desktop and mobile software to cloud infrastructure. They use different tools and programming languages to achieve their respective ends.

Dice's latest Tech Salary Report suggests software engineers can earn quite a bit, especially with specialization and seniority. For example, a principal software engineer can earn $153,288, while a cloud engineer can pull down $145,416. Back-end software engineers earn slightly lower ($129,150), just ahead of data engineers ($122,811) and systems engineers ($120,800).

Meanwhile, the Tech Salary Report puts the average web developer salary at $87,194, but keep in mind that number can climb far higher with specialization and experience. (The Report also puts the average tech professional salary at $111,348, up 2.3 percent year-over-year.)

See the article here:
What's the Difference Between a Web Developer and a Software ... - Dice Insights


Data Management Workloads Drive Spending on Compute and … – IDC

NEEDHAM, Mass., November 2, 2023 Structured Databases/Data Management workloads are driving the largest share of enterprise IT infrastructure spending in the first half of 2023 (1H23), according to the International Data Corporation (IDC) Worldwide Semiannual Enterprise Infrastructure Tracker: Workloads. Organizations spent $6.4 billion on compute and storage hardware infrastructure to support this workload category in 1H23, which represents 8.5% of the market total.

Despite the high level of spending, Structured Databases/Data Management wasn't among the fastest growing workloads with just 1.1% annual growth. Industry Specific Business Applications saw growth of 33.3% in value compared to 1H22. HR/Human Capital Management (HCM), Business Intelligence/Data Analytics, and Development Tools and Applications workloads experienced double-digit year-over-year growth in hardware infrastructure demand with spending growing at 28.5%, 10.4%, and 10.3% respectively. However, only Business Intelligence/Data Analytics ranks in the top 5 workloads of hardware spending while the other two workloads (HR/Human Capital Management (HCM) and Development Tools and Applications) rank 15 and 10 in spending.

Workload spending profiles vary among product categories. For ODM Direct the highest spending in 1H23 was concentrated on Digital Services with $2.6 billion representing 11.7% of ODM spending. In comparison, OEM Servers and OEM Storage spending is led by Structured Databases/Data Management.

Workload priorities vary within regions as well, with Asia/Pacific's spending for AI Lifecycle in 1H23 at $2.0 billion just behind Structured Databases/Data Management. Infrastructure spending was the second largest workloads category in Europe, Middle East and Africa (EMEA) during 1H23 at $0.9 billion. In the Americas, the largest workload in 1H23 was Digital Services at $2.8 billion with ODMs representing 46% of total infrastructure spending in the region for the first half of the year.

As enterprise workloads continue to move into public cloud, investments in shared infrastructure (a hardware base for delivering public cloud services) will be increasing faster than investments in dedicated infrastructure across all workloads. Spending for workloads in cloud and shared infrastructure environments will grow at a compound annual growth rate (CAGR) of 11.6% over the next five years with Digital Services and AI Lifecycle spending leading the way. IDC predicts spending for Digital Services will reach $13.4 billion in 2027 and AI Lifecycle $8.1 billion with five-year CAGRs of 15%. Infrastructure spending for cloud and dedicated environments will grow at a 10.7% CAGR over the next five years, with AI Lifecycle being the fastest growing workload with a five-year CAGR of 16.3%. By 2027, IDC expects AI Lifecycle to be the second largest workloads category in terms of spending at $3.9 billion.

Over the next five years, IDC forecasts growth in compute and storage systems spending for cloud-native workloads to be almost twice as high as that of infrastructure supporting traditional workloads (12.2% vs 6.2% CAGR) although traditional workloads will continue to account for the majority of spending during the forecast period (71% in 2027).

Spending for workloads in non-cloud infrastructure environments will grow at a 1.7% CAGR over the next five years with Text and Media Analytics and AI Lifecycle as the fastest growing workloads with five-year CAGRs of 9% and 6.1% respectively. Structured Database/Data Management, Content Applications, and Business Intelligence/Data Analytics workloads combined will account for 24% of spending in 2027 while Text and Media Analytics and AI Lifecycle combined will only account for 11.3% of spending in the same year.

Taxonomy Notes

IDC estimates spending on compute and storage systems across 19 mutually exclusive workloads, defined as applications and their datasets. The full taxonomy including definitions of the workloads can be found in IDC's Worldwide Semiannual Enterprise Infrastructure Tracker: Workloads Taxonomy, 2023 (IDC #US51045423). The majority of workloads map to secondary or functional software markets while several, including Content Delivery and Digital Services, have no equivalent in the software market structure. Workloads are further consolidated into seven workload categories, which include: Application Development & Testing, Business Applications, Data Management, Digital Services, Email/Collaborative & Content Applications, Infrastructure, and Technical Applications.

IDC's Worldwide Semiannual Enterprise Infrastructure Tracker: Workloads provides insight into how enterprise workloads are deployed and consumed in different areas of the enterprise infrastructure hardware market and what the projections are for future deployments. Workload trends are presented by region and infrastructure platform and shared for the enterprise infrastructure hardware market with a five-year forecast. This Tracker is part of the Worldwide Quarterly Enterprise Infrastructure Tracker, which provides a holistic total addressable market view of the four key enabling infrastructure technologies for the datacenter (servers, external enterprise storage systems, and purpose-built appliances: HCI and PBBA).

For more information about IDC's Semiannual Enterprise Infrastructure Tracker: Workloads, please contact Lidice Fernandez at lfernandez@idc.com.

About IDC Trackers

IDC Tracker products provide accurate and timely market size, vendor share, and forecasts for hundreds of technology markets from more than 100 countries around the globe. Using proprietary tools and research processes, IDC's Trackers are updated on a semiannual, quarterly, and monthly basis. Tracker results are delivered to clients in user-friendly Excel deliverables and on-line query tools.

Click here to learn about IDC's full suite of data products and how you can leverage them to grow your business.

About IDC

International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. With more than 1,300 analysts worldwide, IDC offers global, regional, and local expertise on technology, IT benchmarking and sourcing, and industry opportunities and trends in over 110 countries. IDC's analysis and insight helps IT professionals, business executives, and the investment community to make fact-based technology decisions and to achieve their key business objectives. Founded in 1964, IDC is a wholly owned subsidiary of International Data Group (IDG), the world's leading tech media, data, and marketing services company. To learn more about IDC, please visit http://www.idc.com. Follow IDC on Twitter at @IDC and LinkedIn. Subscribe to the IDC Blog for industry news and insights.

All product and company names may be trademarks or registered trademarks of their respective holders.

"); tb_show("Share the image", "#TB_inline?height=200&width=600&inlineId=embedDialog", null); } function calculateContainerHeight(attachmentId) { var img = $("img[src*=" + attachmentId + "]"); if (img === undefined) { return 600; } else { img = img[0]; } var iframeHeight; if (img.naturalWidth < 600) { iframeHeight = img.naturalHeight + 100; } else { iframeHeight = (img.naturalHeight / (img.naturalWidth / 600)) + 100; } return Math.ceil(iframeHeight) + 10; } function copyEmbedCode() { $("#embedCodeArea").select(); document.execCommand('copy'); $("#copyButton").val("Copied"); setTimeout(function() { $("#copyButton").val("Copy"); }, 2000); } $(".icn-wrapper a.embed-image-button").click(function(e) { e.preventDefault(); });

More here:
Data Management Workloads Drive Spending on Compute and ... - IDC


New Quantum Effect Could Mean The Kondo State Isn’t What We Thought – ScienceAlert

A super-small, highly precise, ultra-cold physics experiment has revealed a brand new quantum state, called the spinaron.

It occurs under extremely cold conditions when a cobalt atom on a copper surface is subjected to a strong magnetic field, causing its direction of spin to flip back and forth.

The discovery could trigger a major rethink of assumptions on how low-temperature conductive materials behave, according to physicists from the Julius Maximilian University of Würzburg (JMU) and the Jülich Research Centre in Germany.

The researchers were able to see the magnetic spin of the cobalt atom in the experimental setup thanks to the combination of the intense magnetic field and an iron tip added to their atomic-scale scanning tunneling microscope.

This spin wasn't rigid, but rather continually switching back and forth, which then excited the electrons of the copper surface. To use an analogy very helpful in high-level physics, the cobalt atom is like a spinning rugby ball.

"When a rugby ball spins continuously in a ball pit, the surrounding balls are displaced in a wave-like manner," says experimental physicist Matthias Bode from JMU.

"That's precisely what we observed the copper electrons started oscillating in response and bonded with the cobalt atom."

The new observations had previously been predicted, and challenge existing thinking on something called the Kondo effect: a curious lower limit to electrical resistance when magnetic impurities are present in cold materials.

In these new experiments, the cobalt atom stays in constant motion, maintaining its magnetism even while interacting with the electrons. Under the rules of the Kondo effect, however, the magnetic moment would be neutralized by the electron interactions.

Since the 1960s, scientists have used the Kondo effect to explain certain types of quantum activity when metals such as cobalt and copper are combined. Now, some of that long-standing thinking might have to be changed and the researchers are looking for other scenarios where spinarons could apply instead of the Kondo effect.

"We suspect that many might actually be describing the spinaron effect," says experimental physicist Artem Odobesko from JMU, adding: "If so, we'll rewrite the history of theoretical quantum physics."

Quantum physics can be difficult to get your head around, but every breakthrough like this leads scientists to a greater understanding of how materials and the forces on them work together at the atomic level.

And the researchers themselves acknowledge the tension between making such an important discovery in highly precise and extreme lab conditions and yet not really having any immediate practical use for it.

"Our discovery is important for understanding the physics of magnetic moments on metal surfaces," says Bode. "While the correlation effect is a watershed moment in fundamental research for understanding the behavior of matter, I can't build an actual switch from it."

The research has been published in Nature Physics.

Originally posted here:
New Quantum Effect Could Mean The Kondo State Isn't What We Thought - ScienceAlert
