
Artificial Intelligence can improve CT screening to identify patients infected with the Coronavirus – EdexLive


Researchers are developing a new technique using Artificial Intelligence (AI) that would improve CT screening to more quickly identify patients infected with COVID-19.

The new technique will reduce the burden on the radiologists tasked with screening each image, according to a research team from the University of Notre Dame in the US. Testing challenges have led to an influx of patients hospitalised with COVID-19 requiring CT scans, which have revealed visual signs of the disease, including ground-glass opacities, abnormal lesions that present as a haziness on images of the lungs.

"Most patients with coronavirus show signs of COVID-related pneumonia on a chest CT but with a large number of suspected cases, radiologists are working overtime to screen them all," said study lead author Yiyu Shi from the Notre Dame. "We have shown that we can use deep learning -- a field of AI -- to identify those signs, drastically speeding up the screening process and reducing the burden on radiologists," Yiyu added.

The research team is working to identify the visual features of Coronavirus-related pneumonia through analysis of 3D data from CT scans. The team is working to combine the analysis software with off-the-shelf hardware for a light-weight mobile device that can be easily and immediately integrated into clinics around the country.

The challenge, Shi said, is that 3D CT scans are so large, it's nearly impossible to detect specific features and extract them efficiently and accurately on plug-and-play mobile devices.

"We're developing a novel method inspired by Independent Component Analysis, using a statistical architecture to break each image into smaller segments, which will allow deep neural networks to target COVID-related features within large 3D images," Shi wrote.

The research team is collaborating with radiologists at Guangdong Provincial People's Hospital in China and the University of Pittsburgh Medical Centre, where a large number of CT images from COVID-19 pneumonia are being made available. The team hopes to have development completed by the end of the year.

The rest is here:
Artificial Intelligence can improve CT screening to identify patients infected with the Coronavirus - EdexLive

Read More..

Lost your job due to coronavirus? Artificial intelligence could be your best friend in finding a new one – The Conversation US

Millions of Americans are unemployed and looking for work. Hiring continues, but theres far more demand for jobs than supply.

As scholars of human resources and management, we believe artificial intelligence could be a boon for job seekers who need an edge in a tight labor market like today's.

What's more, our research suggests it can make the whole process of finding and changing jobs much less painful, more effective and potentially more lucrative.

Over the last three years, we've intensely studied the role of AI in recruiting. This research shows that job candidates are positively inclined to use AI in the recruiting process and find it more convenient than traditional analog approaches.

Although companies have been using AI in hiring for a few years, job applicants have only recently begun to discover the power of artificial intelligence to help them in their search.

In the old days, if you wanted to see what jobs were out there, you had to go on a job board like Monster.com, type in some keywords, and then get back hundreds or even thousands of open positions, depending on the keywords you used. Sorting through them all was a pain.

Today, with AI and companies like Eightfold, Skillroads and Fortay, it is less about job search and more about matchmaking. You answer a few questions about your capabilities and preferences and provide a link to your LinkedIn or other profiles. AI systems that have not just logged open jobs but also analyzed the companies behind the openings, based on things like reputation, culture and performance, then produce match reports showing the best fits for you in terms of job and company.

Typically, there is an overall match score expressed as a percentage from 0% to 100% for each job. In many cases the report will even tell you which skills or capabilities you lack or have not included and how much their inclusion would increase your match score. The intent is to help you spend your time on opportunities that are more likely to result in your getting hired and being happy with the job and company after the hire.

Usually, when you look for a job, you apply to lots of openings and companies at the same time. That means two choices: save time by sending each one a mostly generic resume, with minor tweaks for each, or take the time and effort to adjust and tailor your resume to better fit specific jobs.

Today, AI tools can help customize your resume and cover letter for you. They can tell you what capabilities you might want to add to your resume, show how such additions would influence your chances of being hired and even rewrite your resume to better fit a specific job or company. They can also analyze you, the job and the company and craft a customized cover letter.

While researchers have not yet systematically examined the quality of human- versus AI-crafted cover letters, the AI-generated samples we've reviewed are difficult to distinguish from the ones we've seen MBA graduates write for themselves over the last 30 years as professors. Try it for yourself.

Granted, for lots of lower-level jobs, cover letters are relics of the past. But for higher-level jobs, they are still used as an important screening mechanism.

Negotiations over compensation are another thorny issue in the job search.

Traditionally, applicants have been at a distinct informational disadvantage, making it harder to negotiate for the salary they may deserve based on what others earn for similar work. Now AI-enabled reports from PayScale.com, Salary.com, LinkedIn Salary and others provide salary and total compensation reports tailored to job title, education, experience, location and other factors. The data comes from company-reported numbers, government statistics and self-reported compensation.

For self-reported data, the best sites conduct statistical tests to ensure the validity and accuracy of the data. This is only possible with large databases and serious number crunching abilities. PayScale.com, for example, has over 54 million respondents in its database and surveys more than 150,000 people per month to keep its reports up-to-date and its database growing.

Although no academics have yet tested if these reports result in better compensation packages than in the old days, research has long established that negotiating in general gets candidates better compensation offers, and that more information in that process is better than less.


Use of these tools is growing, especially among young people.

A survey we conducted in 2018 found that half of employed workers aged 18 to 36 said that they were likely or highly likely to use AI tools in the job search and application process. And 64% of these respondents felt that AI-enabled tools were more convenient.

Most of the research on the use of AI in the hiring process, including our own, has focused on recruitment, however, and the use of the technology is expected to double over the next two years. We've found it to be effective for companies, so it seems logical that it can be very useful for job candidates as well. In fact, at least US$2 billion in investments are fueling human resources startups aimed at using AI to help job candidates, according to our analysis of Crunchbase business data.

While more research is needed to determine exactly how effective these AI-enabled tools actually are, Americans who lost their jobs due to the coronavirus could use all the help they can get.

Read this article:
Lost your job due to coronavirus? Artificial intelligence could be your best friend in finding a new one - The Conversation US

Read More..

Migrating applications to cloud with Amazon EventBridge – Security Boulevard

So what's a monolithic application anyway? It's essentially an application where almost every single piece of functionality has been written in the code by its developers and is typically built to run as a single unit on a single server. A typical example is WordPress. This was by far the main model in the past, and many applications are still developed this way today.

Monolithic applications can come with a number of issues. For instance, they typically run on snowflake servers, fragile servers that keep system administrators awake at night for fear that they might experience some sort of problem.

The main issue with monolithic apps is most likely that additional features and bug fixes take a long time due to the tight coupling and side effects between their various components.

Developers and managers are used to dealing with such applications, which are usually good enough to fulfill all of their requirements. But there are steps you can take to make a monolithic application enter the world of the cloud, rendering it more robust and increasing its availability. It is also a great relief to no longer have to rely on a snowflake server.

Amazon EventBridge is a very useful tool in this endeavor.

In most cases, businesses will want to keep their code base, which usually works reasonably well after having been refined over many years. Performing a profound refactoring of an existing monolithic application to follow a cloud-first strategy can be very costly and time-consuming and would probably fail any cost-benefit analysis.

The good news is that you can still benefit from the cloud without too much refactoring by changing the operational context. As we will see, a monolithic application can transparently benefit from certain cloud services to increase its reliability and availability as well as enable it to scale per demand, all with little or no refactoring of the application code itself.

A typical case in point is WordPress. WordPress is a publishing and content management software that was written long before the notion of the cloud even existed. Now, you have solutions that let you integrate many cloud services with WordPress, allowing multiple instances to run in parallel and making WordPress much more reliable and available. AWS offers various reference architectures to achieve this, such as this one.

Amazon EventBridge is a glue service offered by AWS that triggers a given action either periodically or based on events originating from other services. EventBridge offers different options for periodic triggers:

Here are some examples of event-based triggers:

The list is very, very long.

Should you decide to move your monolithic application to the AWS Cloud, Amazon EventBridge can definitely help you. The easy and obvious win is the refactoring of the context of your application, where Amazon EventBridge will typically be used for administrative or background tasks. Essentially, with little effort, you will get a lot of bang for your buck.

You can also use Amazon EventBridge as part of the application's architecture itself, although this is rarely possible without some significant refactoring. One exception would be if your app uses an event-driven architecture. It would then be possible (at least in theory) to move the management of events to Amazon EventBridge, and you would benefit from a reliable, highly available service backed by the might of AWS. However, businesses will rightly question the benefit of such a move, and you should conduct a pertinent cost-benefit analysis before pursuing such a course of action.

A typical use case for Amazon EventBridge is to run background jobs periodically. This is similar to cron, which is familiar to Linux users. Here's a short list of what such jobs could look like:

Amazon EventBridge can trigger jobs based on either cron expressions or the rate of repetition. In the case of a snowflake server, administrators usually create several cron jobs to perform administrative tasks. When migrating to the cloud, it might be beneficial to move such jobs to Amazon EventBridge. This would decouple the application itself from the administrative tasks of the server it's running on by moving those tasks to Amazon EventBridge, which is a serverless service.
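As an illustration only, here is a minimal sketch of such a scheduled rule using the AWS SDK for JavaScript (v2); the rule name, region and function ARN are placeholder values, and granting EventBridge permission to invoke the Lambda function is omitted:

const AWS = require('aws-sdk');
const eventbridge = new AWS.EventBridge({ region: 'us-east-1' });

async function createNightlyCleanupRule() {
  // Fire every day at 02:00 UTC; a rate expression such as 'rate(5 minutes)' would also work.
  await eventbridge.putRule({
    Name: 'nightly-cleanup',
    ScheduleExpression: 'cron(0 2 * * ? *)',
    State: 'ENABLED',
  }).promise();

  // Point the rule at the Lambda function that performs the background job.
  await eventbridge.putTargets({
    Rule: 'nightly-cleanup',
    Targets: [{
      Id: 'cleanup-lambda',
      Arn: 'arn:aws:lambda:us-east-1:123456789012:function:nightly-cleanup', // placeholder ARN
    }],
  }).promise();
}

createNightlyCleanupRule().catch(console.error);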

The other main use case for Amazon EventBridge is to perform a given action when a certain event takes place inside your AWS account. Examples of such events are:

You can then perform pre-programmed actions, such as executing a Lambda function or posting a message to an SNS topic.

For a traditional monolithic application, there are often a number of scripts and mini-services that perform administrative tasks or background jobs. When migrating to AWS to modernize such a setup, refactoring those admin scripts to use AWS serverless services, such as Amazon EventBridge, usually makes sense from a cost-benefit perspective. Generally speaking, moving to a serverless service means a lot less work and worry for you and your system administrators, as this is all done by AWS.

Amazon EventBridge can also capture events generated by third-party vendors, such as Thundra.

The final main use case for Amazon EventBridge is to generate your own events using event buses. EventBridge actually allows you to create your own event buses and post your own custom events. Then, you can respond to those events in exactly the same way.

This can be useful to manage admin or background tasks, but this feature is typically used as part of an application architecture. If your monolithic application is event-based, you might consider using Amazon EventBridge as the event bus. Then, the required refactoring should be pretty minimal, provided you encapsulated all interactions with this event bus in a separate library.
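For illustration, a sketch of posting a custom event to a dedicated bus with the AWS SDK for JavaScript (v2) might look like the following; the bus name, source string and payload are all hypothetical, and the bus is assumed to exist already:

const AWS = require('aws-sdk');
const eventbridge = new AWS.EventBridge();

async function publishOrderCreated(order) {
  await eventbridge.putEvents({
    Entries: [{
      EventBusName: 'orders-bus',       // custom bus, not the default one
      Source: 'com.example.orders',     // identifies the emitting component
      DetailType: 'OrderCreated',       // what rules on the bus match against
      Detail: JSON.stringify(order),    // arbitrary JSON payload
    }],
  }).promise();
}

publishOrderCreated({ orderId: '42', total: 99.9 }).catch(console.error);

Rules attached to the same bus can then route such events to Lambda functions, queues or other targets, which keeps the refactoring limited to the library that wraps these calls.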

It should be noted that Amazon EventBridge is not meant to be real-time, although its latency is quite small, about half a second between posting an event and responding to it on average. Still, other options might be better suited to your needs, such as Amazon SQS or Amazon MQ. It really depends on your application's architecture, and such a discussion would be outside the scope of this article.

In conclusion, Amazon EventBridge is a very versatile event bus that can handle periodic events, events from other AWS services, and events from third parties (such as Thundra). When migrating a monolithic application to AWS, it can definitely be useful as part of a serverless strategy to implement all the administrative tasks required for your application.

Amazon EventBridge even allows you to implement your own custom events on dedicated event buses, which can help you with refactoring both admin tasks and the application itself. As a serverless service, it also has the tremendous advantage of freeing you from all menial tasks related to server maintenance. Finally, although not real-time, Amazon EventBridge is reasonably fast with an average latency of half a second; plus, it offers an SLA of 99.99%.

There are some concerns around observability with events coming in and out of EventBridge. Tracing events over EventBridge is still necessary to achieve the visibility required to debug issues. Thundra is the only vendor out there that provides tracing of EventBridge calls that trigger Lambda functions, ECS and Fargate tasks.

At the end of the day, this is definitely a tool you want to consider when migrating a monolithic application to AWS.

Read the rest here:
Migrating applications to cloud with Amazon EventBridge - Security Boulevard

Read More..

Thycotic Releases Privileged Access Management Capabilities for the New Reality of Cloud and Remote Work – PRNewswire

WASHINGTON, Aug. 11, 2020 /PRNewswire/ --Thycotic, a provider of privileged access management (PAM) solutions for more than 10,000 organizations worldwide, including 25 percent of the Fortune 100, today announced the latest release of its award-winning PAM solution, Secret Server. New capabilities enable organizations to simplify security of modern IT environments that include multiple cloud instances, remote and third-party workers, and mobile users. A streamlined user experience makes oversight of privileged accounts and activities easier and more consistent.

"It's challenging for IT admins and security teams to manage an increasingly diverse IT environment in a consistent way," says Jai Dargan, Thycotic Vice President of Product Management. "Every part of this release is designed to help customers simplify management so their work is scalable, repeatable, and saves time."

Increased cloud visibility and control over multiple platforms

Over 75 percent of organizations use multiple cloud platforms. In addition to Secret Server's existing discovery capabilities for AWS, the latest version allows IT teams to manage Google Cloud and Azure with consistent PAM policies and practices.

Faster implementation and improved security for remote and distributed teams

Managing a large-scale remote workforce is now an expected part of IT operations. The latest release provides flexible options for IT teams to meet both the productivity and security requirements of remote work.

Organizations can test drive the latest version of Thycotic Secret Server for free at https://thycotic.com/products/secret-server/.

About Thycotic

Thycotic is the leading provider of cloud-ready privilege management solutions. Thycotic's security tools empower over 10,000 organizations, from small businesses to the Fortune 100, to limit privileged account risk, implement least privilege policies, control applications, and demonstrate compliance. Thycotic makes enterprise-level privilege management accessible for everyone by eliminating dependency on overly complex security tools and prioritizing productivity, flexibility and control. Headquartered in Washington, DC, Thycotic operates worldwide with offices in the UK and Australia. For more information, please visit www.thycotic.com.

Media Contact: Allison Arvanitis, Lumina Communications, T: 910-690-9482, E: [emailprotected]

SOURCE Thycotic

http://www.thycotic.com

Read the rest here:
Thycotic Releases Privileged Access Management Capabilities for the New Reality of Cloud and Remote Work - PRNewswire

Read More..

The Great Cloud-Quake: US Told to Stop Spying, or Forfeit Right of Access to Personal Data – Computer Business Review



Twice the USA has signed data sharing treaties with the EU, called Safe Harbor and Privacy Shield, in which each side promised to respect the privacy of personal data shared by the other. Unfortunately, while Europeans see privacy as a human right, America sees national security as a greater priority, writes Bill Mew, Founder and CEO, The Crisis Team. Consequently, while the EU has abided by its privacy obligations under the treaties and introduced GDPR to enhance protection, the US has taken a series of actions to increase mass surveillance at the expense of privacy, thus undermining its treaty obligations.

Examples of these actions would be:

Politicians were keen not to rock the boat and therefore, during annual reviews of Privacy Shield, the Europeans expressed their concerns but avoided taking action against the USA. This shadow dance came to an end recently when Privacy Shield was struck down by the EU courts, and restrictions were imposed on the use of Standard Contractual Clauses (SCCs), the only other legal mechanism for data sharing across the Atlantic.

We are still waiting for an interpretation and ruling by the local DPAs in France and Germany as well as the ICO in the UK. However the logic is fairly clear:

We have already seen guidance issued by the Cloud Services for Criminal Justice Organisations (Police, Courts, CPS, Prisons/MoJ, etc.) and these guys know their law.

It states that MS Teams cannot be used LAWFULLY for discussion/sharing of any personal data and that this also applies to any other cloud service (hosted in or on Azure, AWS or GCP) for any OTHER type of discussion/sharing (i.e. processing) of any personal data. This guidance, if extended across the rest of the public and private sector (as it should be), will impact all use of everything from Gmail and Office 365 to Salesforce, LinkedIn and Facebook.

How do we get around this:

You have different data types:

Possible solutions:

You can continue to use the big US cloud providers for (A) and (B), while using a local cloud provider for (C) within country. This would entail a data management overhead ensuring ongoing compliance across any such multi-cloud environment.

Alternatively you could migrate (A), (B) and (C) to a local player that offers a sufficient variety of services at scale. Unfortunately few regional players have adequate scale or an international presence to support you across multiple nations and regions, and if they have operations in the USA then they'd potentially fall under FISA 702 themselves.

A few players, such as OVHcloud, saw this situation coming and structured themselves in such a manner as to have operations in the EU and US that are separate from one another. As Forrester recently noted, this enables OVHcloud to offer unified services at scale within a CLOUD Act-free European environment. The ruling also provides a shot in the arm for the recent GAIA-X European cloud initiative.

All eyes are now on the ICO though: to see what their guidance is and what kind of fudge they seek to sell us, but the ruling is fairly clear and provides them with little room for maneuver.

Are you a CDO/counsel/data protection specialist? Do you agree/disagree with Bill's view? Let us know by emailing our editor

Excerpt from:
The Great Cloud-Quake: US Told to Stop Spying, or Forfeit Right of Access to Personal Data - Computer Business Review

Read More..

Can The EU Create Its Own Cloud Platform? – Forbes

The EU is forming an alternative to US and Chinese cloud platforms called Gaia-X. This effort will fail on so many fronts. It reminds me of Australia's National Broadband Network (NBN), which still struggles for viability after spending an estimated $51 billion.

An idea for a new cloud platform

This CRN article reports: According to Germany's Federal Ministry for Economic Affairs and Energy, the Gaia-X cloud computing platform is expected to be ready to launch in early 2021. That would be a remarkable time frame although admittedly you can assemble a couple of racks of bare metal servers and run virtualized services on them in short order. But can you create the equivalent of AWS? Never.

Just look at the relative size of the major cloud providers. The combined market cap of the four largest cloud companies, Amazon, Microsoft, Google, and Alibaba, is $4.8 trillion (1.569+1.578+1.001+.685). For comparison, the GDP of the largest member of the EU, Germany, is $3.9 trillion. (I know, false equivalence, but I don't know how to calculate a market cap for a country.)

Admittedly, Airbus, a similar venture partnership between government and industry, has succeeded in creating and supporting an aerospace industry in Europe. It has not been a commercial success of course. One can make the argument that having a viable aerospace industry is critical to national security and therefore creating and operating a money losing business is still worth it. Can the same argument be made on the grounds of data privacy? I would argue no, especially when the real purpose is actually the opposite.

The era of digital mercantilism, or, as the East West Institute calls it, Tech Nationalism, was ushered in after Edward Snowden revealed the extent of the NSA's digital tentacles as it reached into as many data sources as it could to collect everything. The blowback was predictable and is destined to harm the US dominance of the technology sector. Also revealed by Snowden were the vast partnerships between the NSA, the rest of the Five Eyes, and Sweden, Germany, and others. They too were beneficiaries of the NSA's systematic Hoovering of the world's data.

The EU General Data Protection Regulation (GDPR) was crafted and enacted in the wake of Snowden's revelations. But note the carve-out in GDPR for law enforcement data records and government agencies. Let's face it: every intelligence agency wants to emulate the US and not be beholden to the NSA for favors in exchange for being able to tap into its data stores in Utah.

The three tech giants that own most of the cloud platform business in the US are rabidly competitive. Yes, we don't know the full extent of their relationship with the Intelligence Community. There is even a mechanism which, in the hands of an overly aggressive regime, could be abused: that of national security letters, whereby the subject of a demand for data cannot even reveal the existence of the letter. But their business would be drastically harmed if they were discovered to be providing backdoors to the FBI or NSA, and they resist such efforts with lobbying and teams of lawyers.

Organizations in the EU should be as leery of working with the US cloud providers as they would be with Chinese cloud providers. But there is an argument to be made against having a domestic cloud platform. Your own government, which has much more interest in your data than a foreign government does, could have unfettered access to your data. From a privacy perspective the people with the power to abuse your private data are your own government, not China.

The answer is not to trust any cloud provider. This is what the term zero-trust meant originally. You encrypt all of your data before it goes to the cloud and you protect the encryption keys with multiple layers of defense. Do the job right and you will know when a government agency wants your data. They will demand the keys or, if it is a foreign agency, they will attempt to steal your keys.
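As a rough sketch of that idea (not a complete solution), the following Node.js snippet encrypts a record with AES-256-GCM before it leaves your premises; in practice the key would live in your own key management system or HSM behind layered access controls, and would never be uploaded alongside the data:

const crypto = require('crypto');

function encryptForCloud(plaintext, key) {
  const iv = crypto.randomBytes(12);                               // unique nonce per object
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  const tag = cipher.getAuthTag();                                 // integrity/authenticity check
  // Only this blob is sent to the cloud provider; the key is not.
  return Buffer.concat([iv, tag, ciphertext]).toString('base64');
}

const key = crypto.randomBytes(32); // illustrative only; hold real keys in your own KMS/HSM
console.log(encryptForCloud('customer record: example personal data', key));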

Visit link:
Can The EU Create Its Own Cloud Platform? - Forbes

Read More..

Going Serverless with AWS Lambda and API Gateway – Dice Insights

One of the goals of cloud computing is to minimize the number of resources you need. This isn't just a matter of eliminating any on-premises servers and drives; cloud-based serverless computing is when you create software that runs only as needed on servers dynamically allocated by the cloud provider (also as needed).

On AWS, you can accomplish serverless programming using AWS Lambda, a service whereby you upload code in the form of functions that are triggered based on different events that you can configure. One such event is a call coming in through AWS API Gateway, which is the AWS service you use for setting up REST API services. As REST calls come in, the API Gateway can trigger your Lambda code to run. The Lambda code will receive information on the REST call that was made, and your code can respond accordingly.

You'll need to create a new Lambda function; in this example, we'll use Node.js. Head over to the Lambda console (from the management console, click the Services dropdown and click Lambda), and click the Create Function button; on the next screen, click Author From Scratch and fill in the details, including a name for your function (such as TestAPIServerless) and a version of Node.js you want to use (probably the highest version listed).

For this sample, you dont need to set permissions. By default, the permissions will allow Lambda to upload logs to CloudWatch. Then, click Create Function.

The next screen allows you to enter code into a browser-based IDE and test the code. We'll just have the code echo back information about the API call itself. In the IDE, replace the starter sample code with this code:
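A minimal sketch of such an echo handler, assuming the Lambda proxy integration configured below, could look like this:

// Echo back the request details that API Gateway passes in via proxy integration.
exports.handler = async (event) => {
    return {
        statusCode: 200,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            path: event.path,                                   // e.g. "/" or "/data"
            method: event.httpMethod,                           // e.g. "GET"
            queryStringParameters: event.queryStringParameters, // null when none are sent
        }),
    };
};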

Now scroll up and click the orange Save button. That's all there is to the Lambda function.

Head over to the API Gateway by clicking the Services dropdown and then API Gateway. In the API Gateway console, scroll down and find REST API, and click Build.

On the next screen, under Choose the Protocol, select REST. Under Create New API, select New API. Under Settings, provide a name such as MyTestAPI. You can fill in Description if you like; for Endpoint Type, choose Regional. Then click Create API.

This creates an empty API. Now you'll need to add a couple of endpoint methods. We'll start with the root endpoint, which is the default. Click the Actions dropdown, then click Create Method. In the dropdown below that, click GET so we can respond to incoming HTTP GET requests. Click the grey checkmark.

In the next screen, for Integration type, select Lambda Function. Check the Use Lambda Proxy Integration box; this configures the calls with additional data that simplifies the data coming into your Lambda function. In the Lambda Function box, type TestAPIServerless. Then click Save. A message will pop up requesting permission for API Gateway to invoke your Lambda function; click OK.

For the next one, we'll create an API endpoint called /data. Each endpoint is a resource, so in the Actions dropdown, click Create Resource. In the screen that follows, next to Resource Name, type Data. Notice the Resource Path fills in automatically with /data. Now click Create Resource.

Now you have two endpoints: the root with just a slash, and /data. Click /data on the left and follow the same steps as above for adding a GET method. We're calling the same Lambda function, so all the steps will be the same, starting with clicking Create Method through clicking Save.

Deploy and Test the API

From the Actions dropdown, click Deploy API. In the pop-up window, for Deployment Stage, click New Stage. Under that, call the Stage Name APITestStage. You can leave the other fields blank. Click Deploy.

The API Gateway provides various ways of testing, which you can explore in the documentation. But for now you can just call your API method using any tool you like, including curl or the browser itself. The URL you'll use is displayed right at the top of the screen in a blue box; it will look something like this:
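The exact host is generated for you, but invoke URLs follow the pattern https://<api-id>.execute-api.<region>.amazonaws.com/<stage>, so with the stage created above it would resemble the following (abc123xyz0 and us-east-1 are placeholders for your API ID and region):

https://abc123xyz0.execute-api.us-east-1.amazonaws.com/APITestStage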

Right-click the URL and open it in a new browser tab. You'll see a simple JSON response:
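Assuming the echo handler sketched earlier, the body for the root endpoint would be roughly:

{"path":"/","method":"GET","queryStringParameters":null}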

Now add /data?abc=10&def=20 to the end of the URL to invoke the /data endpoint with a couple of query parameters. Press Enter and you'll see information about the path and parameters:
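Again assuming the sketch handler above, the response would be along the lines of:

{"path":"/data","method":"GET","queryStringParameters":{"abc":"10","def":"20"}}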

That's it! You now have a serverless API that responds to two different endpoints.

In developing your API, you'll likely want to use your own domain rather than long, default URLs. You can read about it in the official docs.

Also, if you're determined to remain serverless, you can make use of managed database services to provide the database backend to your software; here's sample code for accessing AWS RDS. As you bring the pieces together, you'll see that you have a complete REST API fully hosted on AWS, but without the trouble of allocating any servers. (And as you develop your AWS-related knowledge, if you're interested in machine learning via Amazon's cloud, check out how to get started with SageMaker and more.)


Originally posted here:
Going Serverless with AWS Lambda and API Gateway - Dice Insights

Read More..

Jack in the Box Goes All-In on AWS – Business Wire

SEATTLE--(BUSINESS WIRE)--Today, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced that Jack in the Box, Inc. (NASDAQ: JACK) is running its infrastructure on AWS, going all-in on the world's leading cloud to drive flexibility and resiliency across its organization and enable the company to offer new cloud-based experiences for the more than half a billion customers who visit its restaurants every year. Jack in the Box migrated from its on-premises data centers to AWS, improving the performance and reliability of its IT infrastructure, and positioning the company to better serve customers in the digital age.

Jack in the Box chose AWS for its proven expertise in supporting the restaurant and hospitality industry, its comprehensive set of cloud services, and its scalable infrastructure, which are enabling the company to improve operational efficiency throughout the business. For example, by moving off of Oracle and Microsoft SQL Server legacy databases to Amazon Relational Database Service (RDS) and Amazon Redshift for data warehousing, Jack in the Box was able to automate time-consuming IT administration tasks such as hardware provisioning, database setup, patching, and backups, as well as cut software and hosting costs. Jack in the Box and its franchise restaurant operators now use a common operational dashboard powered by AWS to analyze sales, inventory, food safety, and labor patterns, enabling them to focus on the daily performance of the business rather than the undifferentiated heavy lifting of backing up their servers and other tasks.

Jack in the Box is also leveraging the world's leading cloud to innovate enhanced digital ordering, dining, and customer service experiences for its guests. Going all-in on AWS means Jack in the Box can leverage AWS's broad portfolio of machine learning (ML) services to reveal more powerful insights into its customers' tastes and habits. This includes Contact Lens with Amazon Connect, a set of ML capabilities integrated into a cloud-based contact center service, to enable customer service to better understand customer conversation sentiment and trends. Jack in the Box will look to use these insights to more effectively train its customer service agents, replicate successful interactions, and identify product feedback that can be used to develop new menu offerings and promotions.

Moving forward, Jack in the Box is actively planning future innovations, including helping its restaurants more accurately predict customer traffic and optimize service time and food costs using Amazon SageMaker, AWS's service for building, training, and deploying ML models. In addition, in response to increased use of its mobile app and delivery service as customers altered their routines this year, Jack in the Box also plans to use Amazon Personalize, an AWS ML service for creating individualized recommendations, to present customers with tailored suggestions on new food and beverage options and add-ons.

"As one of the nation's first hamburger chains, we pride ourselves on being a leader in fast food innovation, offering customers creative new menu items and the ability to customize their meals. By using AWS's full portfolio of cloud services, we can continue to innovate new customer experiences while providing valuable information to our franchisees to help them operate more efficiently," said Drew Martin, Chief Information Officer, Jack in the Box. "AWS gives us the ability to be a more flexible, resilient, and data-driven organization, which is essential for our business to understand and adjust to the impacts of challenges such as COVID-19, flexibly scaling and contracting our resources to optimize how we operate."

"Jack in the Box has been delighting restaurant goers for nearly 70 years. Now, with AWS's proven infrastructure and deep portfolio of services powering their IT operations, they are able to expand their use of digital channels like online ordering and delivery apps to continue earning the loyalty of future generations," said Greg Pearson, Vice President, Worldwide Commercial Sales at Amazon Web Services, Inc. "By going all-in on AWS, Jack in the Box can spend most of their time innovating versus having to navigate multiple platforms, giving the company and its restaurant franchise owners the ability to understand their customers better and anticipate their needs, while also providing the scale and flexibility to quickly respond to changing business operating conditions."

About Amazon Web Services

For 14 years, Amazon Web Services has been the world's most comprehensive and broadly adopted cloud platform. AWS offers over 175 fully featured services for compute, storage, databases, networking, analytics, robotics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 77 Availability Zones (AZs) within 24 geographic regions, with announced plans for nine more Availability Zones and three more AWS Regions in Indonesia, Japan, and Spain. Millions of customers, including the fastest-growing startups, largest enterprises, and leading government agencies, trust AWS to power their infrastructure, become more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Fire tablets, Fire TV, Amazon Echo, and Alexa are some of the products and services pioneered by Amazon. For more information, visit http://www.amazon.com/about and follow @AmazonNews.

About Jack in the Box

Jack in the Box Inc. (NASDAQ: JACK), based in San Diego, is a restaurant company that operates and franchises Jack in the Box restaurants, one of the nation's largest hamburger chains, with more than 2,220 restaurants in 21 states. Known as the pioneer of all-day breakfast and the late night category, Jack in the Box prides itself on being the curly fry in a world of regular fries. For more information on Jack in the Box, including franchising opportunities, visit http://www.jackinthebox.com. If you have media inquiries, please reach out to media@jackinthebox.com.

Read this article:
Jack in the Box Goes All-In on AWS - Business Wire

Read More..

Keeping Control Over Cloud With IPAM Sync Featured – The Fast Mode

Hybrid clouds are becoming ever more ubiquitous today. Over 90 percent of companies in Asia Pacific (excluding Japan) will rely on a mix of on-premise, public clouds and private clouds by 2021 to meet their infrastructure needs, according to a recent study by IDC.

At the same time, the opportunity for cybercrime has increased in tandem with the increased use of data and infrastructure; it has also been compounded by disruptions to the technology industry - the COVID-19 pandemic being a case in point. The 2019 Singapore Cyber Landscape Report emphasised the importance of a resilient critical information infrastructure as one of its objectives to prevent significant disruptions to its economy and society.

For managing cloud infrastructures, the options are to use tools from the cloud provider, to develop home-grown solutions, or to take advantage of a cloud management platform. Unfortunately, as a result, IT teams are likely to face challenges such as siloed management, limited visibility of resources, or having to manage multiple separate repositories.

Visibility is key for management

The fact remains that using multiple hosting providers for application workloads does not ease administration, operations or troubleshooting. It is a real challenge to maintain coherence between all infrastructure components located in multiple datacenters, cloud providers and IAAS solutions. When everything was hosted in a single big datacenter using a single VMware cluster, it was simpler to manage, despite being distant from most infrastructure and operations teams. Therefore, having visibility from a single viewpoint is key for infrastructure management, so these teams require a central trusted repository which is accurate and up-to-date, wherever the workloads are running. This can be best provided by an IPAM (IP Address Management) solution. The repository helps not only simple management activities, but also automated network tasks and more advanced requirements from business teams, such as auditing or security orchestration.

Why ongoing synchronization matters

Ensuring the central repository remains accurate requires Cloud IPAM Sync, i.e. information synchronized in near real time between the clouds and the IPAM. This ensures that the repository of information is kept up-to-date, a feature of growing importance in our fast-developing world of technology. Synchronization is an ongoing process that browses the Azure and AWS resources to find new ones that will be created in the IPAM, old ones that will be suppressed and existing ones that will eventually be updated. During the synchronization process, network automation linked to the creation or destruction of a subnet object or an IP address will be automatically triggered. This enables pushing of the information to other systems such as billing, accounting, security or auditing. It also avoids tier systems performing their own discovery of resources inside multiple cloud environments and makes them use the IPAM as the central repository of information, the single source of network truth. Getting access to cloud inventory requires credentials, so centralizing the usage of these credentials does not compromise security in any way.
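To make that reconcile loop concrete, here is a purely illustrative JavaScript sketch; listCloudSubnets and the ipam helper object are hypothetical stand-ins for a cloud provider's inventory API and an IPAM solution's API:

async function syncCloudToIpam(listCloudSubnets, ipam) {
  const cloudSubnets = await listCloudSubnets();   // discovered in Azure/AWS tenants
  const ipamSubnets = await ipam.listSubnets();    // current repository content

  const cloudByCidr = new Map(cloudSubnets.map(s => [s.cidr, s]));
  const ipamByCidr = new Map(ipamSubnets.map(s => [s.cidr, s]));

  for (const [cidr, subnet] of cloudByCidr) {
    if (!ipamByCidr.has(cidr)) {
      await ipam.createSubnet(subnet);             // new resource: create it in the IPAM
    } else if (ipamByCidr.get(cidr).name !== subnet.name) {
      await ipam.updateSubnet(subnet);             // existing resource: refresh its metadata
    }
  }

  for (const [cidr, subnet] of ipamByCidr) {
    if (!cloudByCidr.has(cidr)) {
      await ipam.deleteSubnet(subnet);             // resource gone from the cloud: suppress it
    }
  }
}

Each create, update or delete above is the natural place to trigger the network automation mentioned earlier, such as billing, accounting, security or auditing hooks.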

A cost control system requiring accurate knowledge of the amount of resources in a cloud is an example. Cloud solutions offer pay-as-you-go billing features and are therefore difficult to control. Generally, 40% of servers in IaaS environments are not used for production and are mostly utilized during working hours. As a result, default pay-as-you-go solutions for controlling resources are inefficient. Since the IPAM provides comprehensive visibility of all the resources currently running in various tenants, a comprehensive business analytics dashboard can track the evolution of running resources. This can then be correlated to the overall billing system, resulting in significant cost savings over time.

Extend discovery to multi-cloud, include apps and devices

A robust solution is especially beneficial for the agility, reliability and security of cloud infrastructure for internal cloud resource inventory and network automation. If the Cloud IPAM Sync enhances the discovery process beyond internal datacenter boundaries, the IPAM can be considered as the central and unique source of truth for any IP-related information, particularly if applications and devices are included.

It is not an overstatement to say that advanced Cloud IPAM Sync functionality will benefit businesses in overcoming their hybrid cloud challenges. The single-viewpoint visibility, unified management and control afforded through a central repository is key to helping businesses ride the wave of uncertainty and manage future challenges.

Read more here:
Keeping Control Over Cloud With IPAM Sync Featured - The Fast Mode

Read More..

Park Place Technologies Introduces DMSO, a New Industry Category that Elevates IT Infrastructure Services to Accelerate Business Transformation -…

CLEVELAND, Aug. 11, 2020 /PRNewswire/ --Park Place Technologies, a digital infrastructure management company that simplifies the administration of complex technology environments worldwide, today introduced DMSO, a fully integrated approach to managing critical infrastructure. DMSO is a simplified and automated approach to Discovering, Monitoring, Supporting and Optimizing digital infrastructures to maximize uptime, create cost efficiencies, enable greater infrastructure control and visibility, and enhance asset performance. The DMSO market is expected to be $228 billion annually by 2023.

As businesses continue their digital transformations, they depend on data that resides on-premises, in public and private clouds, on devices at the edge, and in networks and operation centers that span the globe. Managing these complex environments is becoming increasingly difficult. Exponential increases in time, labor and cost, as well as the complexity of navigating a maze of service providers to establish clear accountability and support, require a more intelligent and flexible approach. With DMSO, Park Place clients will maximize uptime, improve operational speed, eliminate IT chaos, and boost return on investment, ultimately accelerating their digital transformations.

"Data centers have changed, and the concept of infrastructure continues to evolve radically as businesses move to implement digital transformation in its many forms," said Chris Adams, CEO of Park Place Technologies. "This requires a more strategic approach to maintain physical and virtual infrastructures and gain insights through automation and analytics. This is the genesis of DMSO and we are confident that it represents a new way to deliver value and help transform critical infrastructure into a strategic business asset."

Defining DMSO

Park Place Technologies, in consultation with industry analysts and Park Place customers, leveraged three decades of insight gained from providing global hardware maintenance for 17,000 customers in 58,000 data centers across 150 countries. Park Place has an impeccable record, delivering a 97 percent first-time fix rate and a 31 percent faster mean time to repair (MTTR), and carries a 97 percent customer satisfaction rate. This experience fueled the innovation that developed DMSO to provide comprehensive infrastructure control and visibility. Through a single pane of glass, DMSO will offer a view up and down the technology stack, including hardware, operating systems, networks, databases, applications, and the cloud, for customers to:

Uniquely Positioned to Deliver on the Promise of DMSO Leveraging Global Infrastructure

Park Place Technologies' aggregated service delivery platform monitors and remediates hardware, networks, operating systems and applications. Recent strategic acquisitions, such as the network operations center of IntelliNet, and global network monitoring service Entuity, add new depth and breadth and demonstrate a commitment to advancing DMSO and the future of digital infrastructure. These are in addition to the dozen other acquisitions made in the US, UK, Latin America and APAC over the last few years.

The acquired technologies dovetail with and strengthen ParkView, which delivers an automated monitoring service and will extend beyond the hardware layer into software to include both operating systems and virtual servers, furthering the company's DMSO capabilities. Together with a commitment to continue to add expertise and presence around the world, Park Place Technologies is uniquely suited to advance the DMSO category for the future of digital infrastructure.

An Opportunity Underpinned by Healthy Growth

Demand for DMSO is fueled by a healthy and growing infrastructure market, estimated by industry analysts to reach $228 billion by 2023 (inclusive of dedicated and shared equipment and services). Additionally, the market for data center and network maintenance is expected to exceed $185 billion annually.

"In this digital era, it is imperative that companies put an emphasis on fixing problems before they happen," said Rob Brothers, program vice president, datacenter and support services, IDC. "This new approach to infrastructure management will enable providers like Park Place Technologies to be proactive about identifying and correcting potential problems for customers before they result in potential downtime which could cost them money."

Information technology decision makers agree. A recent survey found that 35 percent cannot seamlessly monitor and optimize cloud capacity and configurations, and 36 percent are missing single-source visibility and monitoring. The issue of a lack of in-house expertise to act and respond to performance alerts and alarms affected 39 percent of respondents.

"DMSO is something which is a positive for the industry," said Paul Alexander, Head of Technical Services, STEM Faculty, The Open University. "Park Place is able to lead on that because they've defined it. They understand where the industry is going. Obviously there's a lot going into the cloud, and in some cases, it's going to be a hybrid. I feel like the industry needed to find a new direction and DMSO is an evolution.

"It's very clear a lot of people who run data centers don't know what equipment they have. So the first problem that you need to solve on the road map is discovery, and that's key as part of DMSO. Once you discovered it, you need monitoring. And if these things were integrated well, like through ParkView, that's a winning solution. I think then the natural progression from that is to support. Optimization totally goes hand-in-hand from there, and it covers a multitude of different platforms. I think the industry as a whole is likely to move towards DMSO."

About Park Place Technologies

Park Place Technologies simplifies the management of complex technology environments worldwide. Our network of parts to support data centers is stored regionally, locally and on-site to allow for fast parts distribution and service to drive Uptime. Park Place created a new technology service category, Discover, Monitor, Support, Optimize (DMSO), a fully integrated approach to managing critical infrastructure. Our industry-leading and award-winning services include ParkView Managed Services, Entuity software, and our Enterprise Operations Center. For more information, visit us at http://www.parkplacetechnologies.com.

MEDIA CONTACTS
Jennifer Deutsch, Chief Marketing Officer, [emailprotected], 440-991-3105

Michael Miller, Global Content and Communications Manager, [emailprotected], 440-683-9426

SOURCE Park Place Technologies


Excerpt from:
Park Place Technologies Introduces DMSO, a New Industry Category that Elevates IT Infrastructure Services to Accelerate Business Transformation -...

Read More..