
Gradient AI and Merlinos Team to offer State-of-the-Art Artificial Intelligence and Expert Actuarial and Insurance Industry Consulting Services -…

BOSTON--(BUSINESS WIRE)--Gradient AI, the leading enterprise software provider of artificial intelligence solutions in the insurance industry, recently announced that it has partnered with Merlinos & Associates to incorporate state-of-the-art Artificial Intelligence (AI) and Machine Learning (ML) solutions with expert actuarial and insurance industry consulting services.

Layering Merlinos & Associates' expert consulting services on top of Gradient's high-precision models accelerates an insurance organization's path to implementation and, more importantly, its path to optimization and a measurable return on its investment.

Gradient AI and Merlinos & Associates are especially excited to provide this joint offering, which leverages both organizations' deep subject-matter expertise in the PEO industry to deliver the most comprehensive and holistic predictive risk management solution in the industry, built specifically for PEOs and other risk-sharing organizations.

"Speed and accuracy in both risk assessment and pricing have become paramount for insurance companies, MGUs, and Professional Employer Organizations (PEOs)," said Stan Smith, Founder & CEO, Gradient AI. "We're excited about the combination of our AI/ML predictive analytics solutions with Merlinos' industry-leading actuarial expertise and operationalization capabilities, as this will facilitate improved decision-making, faster responses and measurable improvements for our mutual clients."

The consultants at Merlinos & Associates, through their actuarial, modeling, and industry experience, help risk takers in the insurance industry to maximize the value they can derive when deploying Gradient's AI predictions within their underwriting and claims operations. The combined expertise of Gradient and Merlinos delivers industry-leading planning, operations, deployment, and ongoing measurement of an organization's results based on the client's use of AI.

"We became aware of Gradient AI through our consulting work in the PEO industry, and we quickly realized that there was a great match between their skill set and ours," says Paul Merlino, President of Merlinos & Associates. "We are delighted to team with Gradient and help to expand the use of their tools to the insurance industry."

About Gradient AI

Gradient's artificial intelligence solutions help risk takers in the insurance industry automate and improve underwriting results, reduce claim costs, and improve operational efficiencies. The Gradient software-as-a-service (SaaS) platform boasts a proprietary dataset comprising tens of millions of claims, complemented with dozens of economic, health, geographic and demographic datasets. This robust aggregation of data provides demonstrable value for both underwriting and claims clients across all major lines of insurance, and Gradient's solutions are utilized by many of the most recognized insurance carriers, MGAs, TPAs, pools, PEOs, and more. Gradient focuses exclusively on delivering measurable results for its clients. To learn more about Gradient, please visit: https://www.gradientai.com.

About Merlinos & Associates

Merlinos & Associates delivers traditional actuarial services to a wide range of domestic and international clients, including primary insurers, reinsurers, municipalities, state insurance departments, law firms, examination firms, audit firms, MGAs, PEOs, self-insured entities and groups, captives, and risk retention groups. Merlinos & Associates handles virtually all lines of property, casualty, and health insurance. In addition, the firm offers a wide range of expanded services, including predictive analytics, monitoring and evaluation of insurers' financial condition, actuarial feasibility studies, self-insurance and risk management strategies, and much more. To learn more about Merlinos & Associates, please visit: http://merlinosinc.com/.


Unlocking the power of data with artificial intelligence – TechRadar

Data is the lifeblood of business: it drives innovation and enhances competitiveness. However, its importance was brought to the fore by the pandemic as lockdowns and social distancing drove digital transformation like never before.

About the author

Andrew Brown, General Manager, Technology Group, IBM United Kingdom & Ireland.

Forward-thinking businesses have started to grasp the importance of their data; they understand the consequences of not fully mobilizing it, but many are still at the start of their journey.

Even the best organizations are failing to extract the maximum benefits from their data while keeping it safe. This is where artificial intelligence (AI) comes into play: it can help enterprises with their data in three fundamental ways.

First, without the right tools it is impossible to unlock data's hidden value. For that to happen, businesses need to deploy AI because of its ability to analyze complex datasets and produce actionable insights. These can significantly enhance business agility and improve the foresight of enterprises of all sizes.

The success of any move to adopt AI will depend on a robust IT infrastructure being in place. Transforming data into useful information is only possible with this solid foundation, which in turn allows advanced AI applications to extract the real value locked inside the data.

During the first wave of the pandemic, IBM worked with The Royal Marsden, a world-leading cancer hospital, to launch an AI virtual assistant to alleviate some of the pressures and uncertainty for staff associated with COVID-19. The system depended on fast access to trusted information from various diverse sources, such as the hospital's official policy handbook as well as data from NHS England. By tapping into these rich knowledge sources, staff were able to get quicker answers to workplace queries while the HR team had more time to handle complex requests.

Another issue is that far too many businesses simply don't know how much data they own. Split up into silos, it can be impossible to gain a clear view of not only what data is available but also where it resides. Removing this bottleneck can also be achieved through the implementation of AI. This is important because incomplete data will result in incomplete insights.

Businesses should prioritize making all data sources as simple and accessible as possible. Cloud computing technologies, such as hybrid data management, have a vital role to play here. Adopting them makes it possible to manage all data types across multiple sources and locations, effectively breaking down these silos, which are a major barrier to AI adoption.

IBM has partnered with Wimbledon for more than 30 years, helping the world's leading tennis tournament get the most from its data. Tapping into a wealth of new and archived footage, player data and historical records, fans can now benefit from personalized recommendations and highlights reels. Created through a rules-based recommendation engine integrated across Wimbledon's digital platforms, this personalized content allows fans to track their favorite players through the tournament as well as receive suggestions on emerging talent to follow.

This is all made possible by the hybrid cloud: the data spans a combination of on-premises systems, private clouds, and public cloud. Breaking down these silos has allowed Wimbledon to innovate at pace to attract new global audiences.

While extracting value from data is undoubtedly beneficial for organizations, it also creates risks. Criminals are increasingly aware of the potential to exploit vulnerabilities to disrupt operations or cause reputational issues through leaking sensitive data. The threat landscape is evolving and rising data breach costs are a growing problem for businesses in the wake of the rapid technology shifts triggered by the pandemic.

Over the last year businesses were forced to quickly adapt their technology approaches, with many companies encouraging or requiring employees to work from home, and 60% of organizations moved further into cloud-based activities during the pandemic.

According to the latest annual Cost of a Data Breach report, conducted by Ponemon Institute and analyzed by IBM Security, serious security incidents now cost UK-based organizations an average of $4.67 million (around £3.4 million) per incident, the highest cost in the 17-year history of the report. This is higher than the global average of $4.24 million per incident, highlighting the importance of protecting data for British businesses.

AI has a role to play here, and the study revealed encouraging signs about the impact of intelligent and automated security tools. While data breach costs reached a record high over the past year, the report also showed positive signs about the impact of modern cybersecurity tactics, such as AI and automation, which may pay off by reducing the cost of these incidents further down the line.

The adoption of AI and security analytics was among the top five mitigating factors shown to reduce the cost of a breach. On average, organizations with a fully deployed security automation strategy faced data breach costs of less than half of those with no automation technology in place.

The sector in which a business operates also has a direct impact on the overall cost of a security breach. The report identified that the average cost of each compromised record containing sensitive data was highest for UK organizations in Services (£191 per record), Financial (£188) and Pharmaceuticals (£147). This highlights how quickly the costs of a breach can escalate if a large number of records are compromised.

The Cost of a Data Breach report highlights a number of trends and best practices that were consistent with an effective response to security incidents. These can be adopted by organizations of all types and sizes and can form the basis of a data management and governance strategy:

1. Invest in security orchestration, automation and response (SOAR). Security AI and automation significantly reduce the time to identify and respond to a data breach. By deploying SOAR solutions alongside your existing security tools, it's possible to accelerate incident response and reduce the overall costs associated with breaches.

2. Adopt a zero trust security model to help prevent unauthorized access to sensitive data. Organizations with mature zero trust deployments have far lower breach costs than those without. As businesses move to remote working and hybrid cloud environments, a zero trust strategy can help protect data by only making it accessible in the right context.

3. Stress test incident response plans to improve resilience. Forming an Incident Response team, developing a plan and putting it to the test are crucial steps to responding quickly and effectively to attacks.

4. Invest in governance, risk management and compliance. Evaluating risk and tracking compliance can help quantify the cost of a potential breach in real terms. In turn this can expedite the decision-making process and resource allocation.

5. Protect sensitive data in the cloud using policy and encryption. Data classification schema and retention policies should help minimize the volume of the sensitive information that is vulnerable to a breach. Advanced data encryption techniques should be deployed for everything that remains.
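Point 5 is the most directly codeable of the list. As a minimal sketch only, not drawn from the report itself and assuming the widely used Python cryptography package, encrypting a sensitive record before it is written to cloud storage might look like this:

```python
# Minimal sketch: encrypting a sensitive record before it leaves your control.
# Assumes the third-party "cryptography" package (pip install cryptography);
# key management (an HSM or cloud KMS) is deliberately out of scope here.
from cryptography.fernet import Fernet

# In practice the key comes from a key management service, never hard-coded.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"customer_id": 1042, "card_last4": "4242"}'
token = cipher.encrypt(record)      # ciphertext is safe to store in the cloud
restored = cipher.decrypt(token)    # only holders of the key can recover it

assert restored == record
```

The point of the sketch is simply that anything classified as sensitive under point 5's retention schema is stored only in the encrypted form, so a breach of the storage layer alone does not expose the records.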

So how should a business bring its AI strategy to life? First, organizations must ensure their infrastructure is equipped to handle all the data, processing and performance requirements needed to effectively run AI. If you use your existing storage arrangement without modernizing it, you greatly increase your risk of failure. A hybrid cloud implementation is likely to be the best solution in most instances as it offers the optimum flexibility.

Enterprises should also directly embed AI into their data management and security systems, which should have clearly defined data policies to ensure appropriate levels of access and resilience. The data management system and the data architecture should be optimized for added agility and ease of operation.

A fully featured AI implementation doesn't just aggregate data and perform large-scale analytics; it also enhances security and governance. Together these enable companies to create valuable business insights that fuel innovation. AI will also help ensure that data is used more efficiently and minimize data duplication. But above all, properly managed data is the lifeblood of the enterprise: a resource that needs to be identified and protected. Only then can companies start to climb the AI ladder.


Museum Of Wild And Newfangled Art’s Opening Exhibition Curated By Artificial Intelligence – Broadway World

The Museum of Wild and Newfangled Art (mowna) will open their final show of the year "This Show is Curated by a Machine" on September 23, 2021.

The Artificial Intelligence-curated exhibition opens with a talk on the development of the AI model followed by a Q&A with the AI team: IV (Ivan Pravdin) and museum co-founders cari ann shim sham* and Joey Zaza. "This Show is Curated by a Machine" runs September 23, 2021 through January 31, 2022. Tickets bought prior to opening day, September 23, include entrance to the AI talk and are available at: https://www.mowna.org/museum/this-show-is-curated-by-a-machine

Earlier this year, The Whitney Museum of American Art commissioned and exhibited the work "The Next Biennial Should Be Curated by a Machine" for its online artport. In response, the Museum of Wild and Newfangled Art has designed an artificial intelligence curator that will not only redefine how we look at curation and AI but will also underscore the need to move forward with AI curation in an ethical way.

The artificial intelligence model was trained on image sets from various sources, including the Museum of Modern Art, the Art Institute of Chicago, and the mowna Biennial submissions, an exhibit of around 88 International Artists from 44 countries.

"Curation is very subjective. It's my hope through the development of an AI curator that we can allow for equity and diversity, and eliminate some biases," says cari ann shim sham*.

Artists in the show include Alice Prum, a London based artist whose work explores the invisible relationships between space, the body, and technology. Bridget DeFranco is an east coast media artist working against the high-stimulation nature of the screen. Avideh Salmanpour is an Iranian artist whose paintings explore the bewilderment of contemporary man and the attempt to find a new way.

The artificial intelligence curator was created by multiple artists. IV is a post-contemporary artist working with various artificial intelligence and neural networking techniques. cari ann shim sham* is the co-founder and curator of mowna, a wild artist working at the intersection of dance and technology, and an associate arts professor of dance and technology at NYU Tisch School of the Arts. Joey Zaza is the co-founder and curator of mowna, and works in photography, software, video, sound, and installation. They combined forces to explore the potential of using artificial intelligence in art curation. The team's initial thoughts, strategies and questions in the development of the AI model can be found on mowna's blog.

Human curation is also included alongside the AI curation for "This Show is Curated by a Machine" to offer a comparison. Text written by the team will explain why they think the AI did or did not choose each work. This show is a successful completion of phase one of mowna's AI model, which ranks and curates a show using image-based files. mowna will release a paper with its phase one research and findings to the public. With this data the team will enter phase two development for the AI's ability to curate sound and video files.
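mowna has not published the details of its model, so the following is purely an illustrative sketch, in no way the museum's actual system, of one common way an image-based ranking curator can be built: embed each submission with a pretrained vision network and score it against the embeddings of a reference collection. All file names and the reference set are hypothetical.

```python
# Purely illustrative sketch of image-based ranking; this is NOT mowna's model.
# Assumes PyTorch/torchvision and a handful of local image files.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained backbone used as a generic feature extractor (classifier head removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

prep = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
])

def embed(path):
    with torch.no_grad():
        x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
        return F.normalize(backbone(x), dim=1)

# Hypothetical reference set standing in for whatever the curator was trained on.
reference = torch.cat([embed(p) for p in ["ref1.jpg", "ref2.jpg"]])

def score(path):
    # Rank a submission by its best cosine similarity to the reference embeddings.
    return (embed(path) @ reference.T).max().item()

submissions = ["entry_a.jpg", "entry_b.jpg", "entry_c.jpg"]
print(sorted(submissions, key=score, reverse=True))
```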

"This Show is Curated by a Machine" will be installed and available for viewing on September 23, 2021 and marks the third online art exhibition by mowna. The second, the 2021 mowna Biennial, showcases art of all mediums and focuses on exhibiting art that might have otherwise gone unseen due to gaps in the post-pandemic art world. It is currently still on exhibit until September 22, 2021 and can be viewed on the mowna website. Tickets are a sliding scale of pay-what-you-wish.

mowna makes it their priority to showcase a broad range of art and is committed to diversity in every way. It provides an international online platform for the most timely, diverse, and preeminent artists. At the center of the constantly changing and expanding art world, mowna showcases a mixture of the familiar and unfamiliar. Members will have the opportunity to see artists who have been curated by the MoMA or the Whitney alongside artists available only on mowna.

As the global landscape shifts towards a more technological way of being, mowna is there to meet the needs of an ever-changing art world. The Museum of Wild and Newfangled Art was formed to feature the newest art developments and make art of all mediums accessible to everyone. And it unmistakably builds on that foundation with the upcoming exhibition "This Show is Curated by a Machine".

For more information on current and upcoming exhibitions and events, please visit mowna's pages on Instagram and Facebook as well as the museum's official website.


Federal Court Rules That Artificial Intelligence Cannot Be An Inventor Under The Patent Act – JD Supra

Although this blog typically focuses on decisions rendered in intellectual property and/or antitrust cases that are currently in, or originated in, the United States District Court for the District of Delaware or are in the Federal Circuit, every now and then a decision rendered in another federal trial or appellate court is significant enough that it warrants going beyond the normal boundaries. The recent decision rendered by The Honorable Leonie M. Brinkema, of the United States District Court for the Eastern District of Virginia, in Thaler v. Hirshfeld et al., Civil Action No. 1:20-cv-903-LMB (E.D. Va. September 2, 2021), is such a decision.

In Thaler, the Court confronted, analyzed and answered the question of "can an artificial intelligence machine be an inventor under the Patent Act?" Id. at *1. After analyzing the plain statutory language of the Patent Act and the Federal Circuit authority, the Court held that "the clear answer is no." Id. In reaching its holding, the Court found that Congress intended to limit the definition of "inventor" to natural persons, which means humans, not artificial intelligence. Id. at *17. The Court noted that, "[a]s technology evolves, there may come a time when artificial intelligence reaches a level of sophistication such that it might satisfy accepted meanings of inventorship. But that time has not yet arrived, and, if it does, it will be up to Congress to decide how, if at all, it wants to expand the scope of patent law." Id. at *17-18.

A copy of the Memorandum Opinion is attached.



AI caramba, those neural networks are power-hungry: Counting the environmental cost of artificial intelligence – The Register

Feature: The next time you ask Alexa to turn off your bedroom lights or make a computer write dodgy code, spare a thought for the planet. The back-end mechanics that make it all possible take up a lot of power, and these systems are getting hungrier.

Artificial intelligence began to gain traction in mainstream computing just over a decade ago when we worked out how to make GPUs handle the underlying calculations at scale. Now there's a machine learning algorithm for everything, but while the world marvels at the applications, some researchers are worried about the environmental expense.

One of the most frequently quoted papers on this topic, from the University of Massachusetts, analysed training costs on AI including Google's BERT natural language processing model. It found that the cost of training BERT on a GPU in carbon emissions was roughly the same as a trans-American jet flight.

Kate Saenko, associate professor of computer science at Boston University, worries that we're not doing enough to make AI more energy efficient. "The general trend in AI is going in the wrong direction for power consumption," she warns. "It's getting more expensive in terms of power to train the newer models."

The trend is exponential. Researchers associated with OpenAI wrote that the computing used to train the average model increases by a factor of 10 each year.

Most AI these days is based on machine learning (ML). This uses a neural network, which is a collection of nodes arranged in layers. Each node has connections to nodes in the next layer. Each of these connections has a score known as a parameter or weight.

The neural network takes an input (such as a picture of a hotdog) and runs it through the layers of the neural network, each of which uses its parameters to produce an output. The final output is a judgement about the data (for example, was the original input a picture of a hotdog or not?)

Those weights don't come preconfigured. You have to calculate them. You do that by showing the network lots of labelled pictures of hot dogs and not hot dogs. You keep training it until the parameters are optimised, which means that they spit out the correct judgement for each piece of data as often as possible. The more accurate the model, the better it will be when making judgements about new data.

You don't just train an AI model once. You keep doing it, adjusting various aspects of the neural network each time to maximise the right answers. These aspects are called hyperparameters, and they include variables such as the number of neurons in each layer and the number of layers in each network. A lot of that tuning is trial and error, which can mean many training passes. Chewing through all that data is already expensive enough, but doing it repeatedly uses even more electrons.
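The article describes this loop in prose only, but it is a few lines in any modern framework. Here is a minimal sketch in PyTorch, with random toy data standing in for the "hotdog or not" features; the numbers and layer sizes are illustrative, not from the article.

```python
# Minimal sketch of the train-and-tune loop described above (PyTorch, toy data).
import torch
from torch import nn

X = torch.randn(512, 16)                  # stand-in features for "hotdog or not" images
y = (X[:, 0] > 0).float().unsqueeze(1)    # toy labels

def train(hidden_units, epochs=50, lr=0.1):
    # "hidden_units" is one of the hyperparameters the article mentions tuning.
    model = nn.Sequential(nn.Linear(16, hidden_units), nn.ReLU(),
                          nn.Linear(hidden_units, 1))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):               # each pass nudges the weights (parameters)
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

# Trial-and-error tuning: every candidate value means another full training run,
# which is where the repeated energy cost comes from.
for h in (4, 16, 64):
    print(h, train(h))
```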

The reason that the models are taking more power to train is that researchers are throwing more data at them to produce more accurate results, explains Lukas Biewald. He's the CEO of Weights and Biases, a company that helps AI researchers organise the training data for all these models while monitoring their compute usage.

"What's alarming about about it is that it seems like for every factor of 10 that you increase the scale of your model training, you get a better model," he says.

Yes, but the model's accuracy doesn't increase by a factor of 10. Jesse Dodge, postdoctoral researcher at the Allen Institute for AI and co-author of a paper called Green AI, notes studies pointing to the diminishing returns of throwing more data at a neural network.

So why do it?

"There's a long tail of things to learn," he explains. ML algorithms can train on the most commonly-seen data, but the edge cases the confusing examples that rarely come up are harder to optimise for.

Our hotdog recognition system might be fine until some clown comes along in a hotdog costume, or it sees a picture of a hotdog-shaped van. A language processing model might be able to understand 95 per cent of what people say, but wouldn't it be great if it could handle exotic words that hardly anyone uses? More importantly, your autonomous vehicle must be able to stop in dangerous conditions that rarely ever arise.

"A common thing that we see in machine learning is that it takes exponentially more and more data to get out into that long tail," Dodge says.

Piling on all this data doesn't just slurp power on the compute side, points out Saenko; it also burdens other parts of the computing infrastructure. "The larger the data, the more overhead," she says. "Even transferring the data from the hard drive to the GPU memory is power intensive."

There are various attempts to mitigate this problem. It starts at the data centre level, where hyperscalers are doing their best to switch to renewables so that they can at least hammer their servers responsibly.

Another approach involves taking a more calculated approach when tweaking your hyperparameters. Weights and Biases offers a "hyperparameter sweep" service that uses Bayesian algorithms to narrow the field of potential changes with each training pass. It also offers an "early stopping" algorithm which halts a training pass early on if the optimisation isn't panning out.
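Weights & Biases documents a sweep interface for exactly this. The sketch below shows roughly how a Bayesian sweep with Hyperband-style early termination might be configured with the wandb library; the metric, parameter names and train() body are placeholders, so treat the details as an assumption and check the current docs.

```python
# Sketch of a Bayesian hyperparameter sweep with early stopping, using the
# wandb (Weights & Biases) library; field names follow its documented sweep
# config format, but the values and the training body are placeholders.
import wandb

sweep_config = {
    "method": "bayes",                        # Bayesian search narrows the field each pass
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-1},
        "hidden_units": {"values": [64, 128, 256]},
    },
    # Hyperband-style early termination halts runs that aren't panning out.
    "early_terminate": {"type": "hyperband", "min_iter": 3},
}

def train():
    run = wandb.init()
    cfg = run.config
    # ... build and train a model with cfg.learning_rate, cfg.hidden_units ...
    val_loss = 0.42                           # placeholder metric
    wandb.log({"val_loss": val_loss})

sweep_id = wandb.sweep(sweep_config, project="power-aware-tuning")
wandb.agent(sweep_id, function=train, count=20)
```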

Not all approaches involve fancy hardware and software footwork. Some are just about sharing. Dodge points out that researchers could amortise the carbon cost of their model training by sharing the end result. Trained models released in the public domain can be used without retraining, but people don't take enough advantage of that.

"In the AI community, we often train models and then don't release them," he says. "Or the next people that want to build on our work just rerun the experiments that we did."

Those trained models can also be fine tuned with additional data, enabling people to tweak existing optimisations for new applications without retraining the entire model from scratch.
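The reuse Dodge describes is straightforward in practice: download a model someone else already paid the training cost for, freeze most of it, and retrain only a small head on the new data. A generic sketch, using publicly released torchvision weights purely for illustration:

```python
# Sketch of fine-tuning a publicly released model instead of retraining from scratch.
import torch
from torch import nn
from torchvision import models

# Reuse weights someone else already spent the training energy on.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for p in model.parameters():        # freeze the expensive pretrained layers
    p.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 2)   # small new head for the new task

# Only the head's parameters are updated, so each training pass is far cheaper.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```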

Making training more efficient only tackles one part of the problem, and it isn't the most important part. The other side of the AI story is inference. This is when a computer runs new data through a trained model to evaluate it, recognising hotdogs it has never seen before. It still takes power, and the rapid adoption of AI is making it more of a problem. Every time you ask Siri how to cook rice properly, it uses inference power in the cloud.

One way to reduce model size is to cut down the number of parameters. AI models often use vast numbers of weights in a neural network because data scientists aren't sure which ones will be most useful. Saenko and her colleagues have researched reducing the number of parameters using a concept that they call shape shifter networks that share some of the parameters in the final model.

"You might train a much bigger network and then distil it into a smaller one so that you can deploy a smaller network and save computation and deployment at inference time," she says.

Companies are also working on hardware innovations to cope with this increased inference load. Google's Tensor Processing Units (TPUs) are tailored to handle both training and inference more efficiently, for example.

Solving the inference problem is especially tricky because we don't know where a lot of it will happen in the long term. The move to edge computing could see more inference jobs happening in lower-footprint devices rather than in the cloud. The trick there is to make the models small enough and to introduce hardware advances that will help to make local AI computation more cost-effective.

"How much do companies care about running their inference on smaller devices rather than in the cloud on GPUs?" Saenko muses. "There is not yet that much AI running standalone on edge devices to really give us some clear impetus to figure out a good strategy for that."

Still, there is movement. Apple and Qualcomm have already produced tailored silicon for inference on smart phones, and startups are becoming increasingly innovative in anticipation of edge-based inference. For example, semiconductor startup Mythic launched an AI processor focused on edge-based AI that uses analogue circuitry and in-memory computing to save power. It's targeting applications including object detection and depth estimation, which could see the chips turn up in everything from factories to surveillance cameras.

As companies grapple with whether to infer at the edge, the problem of making AI more energy efficient in the cloud remains. The key lies in resolving two opposing forces: on the one hand, everyone wants more energy efficient computing. On the other, researchers constantly strive for more accuracy.

Dodge notes that most academic AI papers today focus on the latter. Accuracy is winning out as companies strive to beat each other with better models, agrees Saenko. "It might take a lot of compute but it's worthwhile for people to claim that one or two percent improvement," she says.

She would like to see more researchers publish data on the power consumption of their models. This might inspire competition to drive efficiencies up and costs down.

The stakes may be more than just environmental, warns Biewald; they could be political too. What happens if computing consumption continues to go up by a factor of 10 each year?

"You have to buy the energy to train these models, and the only people that can realistically afford that will be Google and Microsoft and the 100 biggest corporations," he posits.

If we start seeing a growing inequality gap in AI research, with corporate interests out in front, carbon emissions could be the least of our worries.


How to keep your personal information from getting stolen – Wink News

FORT MYERS

Identity theft can happen to anyone at any time.

In fact, a new report finds two-thirds of people will experience life-changing digital abuse.

There are a few simple things you can do to keep your personal information safe.

From widespread cyber attacks to fraudulent emails and texts, the web has many ways to grab what it needs from you. About 79% of internet users feel they have completely lost control of their personal data.

"As we evolve in technology, it has become more and more of an issue," said Regine Bonneau, CEO of RB Advisory and a cybersecurity consultant.

How can you protect yourself?

First, download an identity protection system. Identity Guard, Identity Force and ID Shield rank in the top three, according to U.S. News and World Report.

Next, check your apps.

They can be used to bombard you with spam.

Some popular ones have come under fire for sharing your information.

Beware of opening weird emails to avoid phishing scams.

Finally, a simple step to keep you safe online is to update your devices regularly.

For more information:

Report identity theft and get a recovery plan – Federal Trade Commission

USA.gov on identity theft

Federal Trade Commission Consumer Information


Apple says it has fixed newly discovered iPhone vulnerability – Silicon Valley

By Christopher Bing | Reuters

A cyber surveillance company based in Israel has developed a tool that can break into Apple iPhones with a never-before-seen technique used at least since February, internet security watchdog group Citizen Lab said on Monday.

The discovery is important because of the critical nature of the vulnerability, which affects all versions of Apple's iOS, OSX, and watchOS, except for those updated on Monday.

The vulnerability exploited by the Israeli firm, named NSO Group, defeats security systems designed by Apple in recent years.

Apple said it fixed the vulnerability in Monday's software update, confirming Citizen Lab's finding. However, an Apple spokesperson declined to comment regarding whether the hacking technique came from NSO Group.

Citizen Lab said it found the malware on the phone of an unnamed Saudi activist, which had been infected with spyware in February. It is unknown how many other users may have been infected.

The vulnerability comes from a flaw in how iMessage automatically renders images. iMessage has been repeatedly targeted by NSO, as well as other cyber arms dealers, prompting Apple to update its architecture. But that upgrade has not fully protected the system.

"The security of devices is increasingly challenged by attackers," said Citizen Lab researcher Bill Marczak.

The U.S. Cybersecurity and Infrastructure Security Agency had no immediate comment.


How Internet of Things Security Is Impacting Retailers – Loss Prevention Magazine

Internet of Things (IoT) security is a growing concern for retailers. "IoT is one of the biggest trends in the market today," said Itzik Feiglevitch, product manager for Check Point Software Technologies, at the RSA Conference in May 2021. Huge numbers of devices are expected to be added to company networks in the coming years.

And while Feiglevitch said they're great (they increase operational efficiency and move companies into the digital world), a retailer also needs to take into consideration that all of those IoT devices are now part of our networks, and they bring with them lots of security risks.

According to Check Point's research, a typical enterprise of 5,000 employees could have as many as 20,000 IoT devices. "I know it seems like a huge number, but think of all the IP TVs, printers, surveillance cameras, or the sensors inside the buildings, the smart elevators, smart lighting: everything is connected to the enterprise network."

IoT sensors are increasingly being used in retail to enhance the customer experience, such as with smart mirrors and digital signage; for insight into customer preferences and behavior; and for loyalty and promotion, using sensors to identify the time and place of the customer to better target assistance or incentives. Connected sensors are being used for managing energy and detecting equipment problems, especially in grocery, and in warehouses and stores to optimize supply and fulfillment, as with RFID and smart shelves.

The global IoT-in-retail market was valued at $31.99 billion in 2020 and is expected to expand at a compound annual growth rate of 26 percent from 2021 to 2028, according to market analysis by Grand View Research. IoT is expected to revamp the retail industry, transforming traditional brick-and-mortar shops into advanced digital stores, according to the report.

The surge in the number of interconnected devices in retail outlets and the decreasing prices of IoT sensors are expected to propel this growth. Retailers' commitment to IoT innovation is contributing to the growth of connected devices, including both RFID tags and beacons, while the proliferation of smartphones and the use of mobile applications are driving growth in the retail software segment.

Problematically, many IoT devices are unmanaged. "They are connected to our network, but we don't have any way to control those devices, to view them, and define what those devices can and cannot do inside our network," said Feiglevitch. "If we go and search for those devices inside our security management system, we will not find those devices."

Most company-connected IoT devices are, in turn, connected to the wider internet, to allow vendors to deliver updates, for example. Attackers, using standard scanning tools, can find those devices. "They know what to look for," said Feiglevitch, noting that there are even search tools to help them: "a Google for IoT hackers," he said. A casual Shodan search will turn up nearly 300,000 surveillance cameras connected to the internet.
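The "Google for IoT hackers" reference is to search engines such as Shodan, which index internet-exposed devices. Defenders can use the same index to check their own footprint. A minimal sketch with Shodan's Python client follows; the API key and query string are placeholders, and such searches should only be run against assets you are authorized to assess.

```python
# Sketch: counting internet-exposed devices with Shodan's Python client
# (pip install shodan). The API key and query are placeholders.
import shodan

api = shodan.Shodan("YOUR_API_KEY")

# Count devices matching a query without pulling full result pages.
result = api.count('webcam org:"Example Retail Corp"')
print("exposed matches:", result["total"])
```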

Once found, connecting to those devices, and hacking into them, tends to be quite easy, Feiglevitch warned. They often have no built-in Internet of Things security, run on legacy operating systems, have weak default passwords, and are difficult to patch. "Many don't have basic security capabilities," he said. "When many of those devices were developed, no one thought about that."

By accessing a device, hackers can manipulate it (to view a camera, for example) or use it for crypto mining or as a bot in a botnet attack. It also can provide hackers a backdoor into the network because of an insecure connection. "Users may not have the right knowledge about how to connect those devices," said Feiglevitch. "They're using the wrong protocols and insecure applications, so through those devices, hackers can get into the network."

In exploitation tests, researchers have found it possible to create untold havoc, from taking over entire smart building systems to tricking medical devices into delivering incorrect doses of medicine, and while vendors typically issue patches, Feiglevitch says those often don't get implemented. Legacy, insecure devices are ubiquitous, he warned.

There are four pillars to address the risks that IoT devices pose to an organizations network, according to Justin Sowder, a security architect for Check Point.

In terms of solution design, Sowder advised that it should consist of three things: an IoT discovery engine; a solution that extracts information and ties it to security protocols; and a security gateway that enforces the security policies.

"This flow should be completely automated: from a new device being connected or an existing device being discovered, to this Internet of Things security management that will extrapolate relevant data and tags to your security policies, and then down to an enforcement point," he said. It should be invisible to users, but discovery, protection, and enforcement in the security realm should nonetheless be happening, he said.
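Check Point sells this as a product, but the flow itself is easy to picture. Below is a vendor-neutral, purely illustrative sketch of the discover, classify, and enforce pipeline Sowder describes; every device attribute, tag and policy rule here is hypothetical.

```python
# Vendor-neutral, illustrative sketch of the automated flow described above:
# discover a device, classify it, map it to a policy tag, push to enforcement.
from dataclasses import dataclass

@dataclass
class Device:
    mac: str
    vendor: str
    banner: str   # e.g. text returned by the device's admin interface

# Step 2: extract attributes and tie them to a policy tag.
def classify(device: Device) -> str:
    banner = device.banner.lower()
    if "camera" in banner:
        return "iot-camera"        # cameras: talk only to the video VLAN
    if "printer" in banner:
        return "iot-printer"       # printers: no outbound internet
    return "iot-unknown"           # default-deny until reviewed

POLICIES = {
    "iot-camera":  ["allow video-vlan", "deny internet"],
    "iot-printer": ["allow print-server", "deny internet"],
    "iot-unknown": ["deny all"],
}

# Step 3: push the resulting rules to the enforcement gateway (stubbed here).
def enforce(device: Device, tag: str) -> None:
    print(f"{device.mac} tagged {tag}: {POLICIES[tag]}")

# Step 1 would be the discovery engine; a newly seen device triggers the rest.
new_device = Device(mac="00:11:22:33:44:55", vendor="Acme", banner="IP Camera v2 login")
enforce(new_device, classify(new_device))
```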

An automated solution is preferable, he believes, to a slower, more heavy-handed cybersecurity approach in which all new devices are assigned a ticket and vetted and managed. "That only encourages shadow IT," he warned.

The need for retailers to have a robust process for gaining control over IoT devices is only growing, as IoT devices proliferate and there is increasing reliance on field devices that communicate back to network data centers. That the infrastructure used to enable IoT devices is beyond the control of both the user and the IT department underscores that risk.

Research indicates that some organizations fail to define exactly who the leaders in charge of assessing and mitigating risk are. Experts suggest that retail organizations may want to consider appointing a Chief IoT Officer, since many projects lie outside the domain of a CIO and IT department.

"IoT isn't an IT project. It's a business project that uses IT," noted one panelist at an IoT session at a LiveWorx tech conference. Another agreed, saying that IT security professionals should be prepared to share Internet of Things security responsibility with other divisions across the enterprise, including physical security teams.


College Park’s IonQ and the University of Maryland are teaming up to open a $20M quantum lab – Technical.ly DC

This fall, the University of Maryland, College Park (UMD) and College Park quantum computing company IonQ are partnering to open the National Quantum Lab (Q-Lab), which will specialize in research on the technology.

Decked out with a commercial-grade quantum computer and hardware from IonQ, the lab will be a space for students and staff to explore solutions using quantum technology, UMD Chief Strategy Officer Ken Ulman said. The lab, which is being created with a $20 million investment from the school, is part of UMD's larger expansion of quantum resources at a time when scientists are moving to take the technology from the lab to commercial companies. So far, UMD has invested $300 million in quantum science, and it has been working in the field on its campus for over 30 years.

Ulman told Technical.ly that UMD decided to pursue a national lab because it became apparent that quantum computing has the potential to help solve many of the world's challenges, while also bringing innovation to the local area.

"We think there's an opportunity here to create," Ulman said. "And we think that 'the Silicon Valley of X' is totally overplayed and overused, but this may be one of the few times that it's appropriate."

The lab, which will be located in the university's innovation-centered development known as the Discovery District, will open next to IonQ's headquarters. It will give students a chance to directly interact with IonQ employees. IonQ will also be assisting with staffing and program development within the lab, and it will serve as a collaborative workspace for students and staff.

The news coincides with IonQ's move to go public, which is expected to be finalized in the next few weeks. The company, which started at a UMD lab, is said to be valued at approximately $2 billion following the IPO.

"We are very proud that the nation's leading center of academic excellence in quantum research chose IonQ's hardware for this trailblazing partnership," said Peter Chapman, president and CEO of IonQ, in a statement. "UMD has been at the vanguard of this field since quantum computing was in its infancy, and has been a true partner to IonQ as we step out of the lab and into commerce, industry, and the public markets."

Its location in the Discovery District, Ulman said, is also very intentional, because the investment in quantum is not happening in a vacuum; it comes alongside a host of investments in the technology in and out of UMD. He hopes that the new center will help bring more innovation and investment to the area, especially given the potential reach of quantum technology. In addition to cybersecurity, he foresees applications in climate change solutions and rapid vaccine deployment, among other uses.

"We believe that creating a hands-on quantum user facility that can bring those talented people from around the world to come to the University of Maryland and collaborate with the men and women at IonQ, we think it's a really important step to creating the ecosystem," Ulman said.


Where the laws of matter break down, a quantum discovery crops up – UPJ Athletics

For decades, scientists have been fascinated by superfluids: materials under extreme conditions where the typical laws of matter break down and friction disappears entirely.

University of Pittsburgh Professor of Physics and Astronomy Vincent Liu and an international team of collaborators report the creation of a stable material that achieves long-sought-after and strange quantum properties. This topological superfluid could find use in a variety of futuristic technologies and in the meantime will provide plenty of new questions for physicists to chew on.

"It's a fundamental concept that might have a very huge impact to society in its application," Liu said.

In his field of artificial materials, there's a close interplay between two kinds of physicists: those like Liu who specialize in theory use math and physics to imagine yet-undiscovered phenomena that could be useful for futuristic technologies, and others who design experiments that use contained, simplified systems of particles to try to create materials that act in the ways theorists predicted. It's the feedback between these two groups that pushes the field forward.

Liu and his collaborators, a team composed of both theorists and experimentalists, have been pursuing a material that holds the useful properties of a superfluid regardless of shape and is also stable in the lab, a combination that has eluded researchers for years. The solution they arrived at was shining lasers in a honeycomb pattern on atoms. The way those lasers combine and cancel each other out in repeating patterns can coerce the atoms into interacting with one another in strange ways. The team published their results in Nature on Aug. 11.

To say that the experiment sits on a technical knife edge would be an understatement. It requires that atoms be kept at a temperature of around one ten-millionth of a degree above absolute zero. "It's among the coolest systems on Earth," Liu said. All the while, the heat delivered by lasers makes it even more challenging to keep it cool.

Even the act of cooling the material creates its own wrinkles. The team's main trick was to use evaporation, meaning the warmest atoms fly off, but achieving a material with the right density means there also needs to be plenty of atoms remaining after evaporation. Combining just the right set of conditions is a stunning technical feat, pioneered in the lab of Liu's collaborator and former postdoc Zhi-Fang Xu, a physicist at the Southern University of Science and Technology in Shenzhen, China. Another collaborator, quantum optics expert Andreas Hemmerich at the University of Hamburg in Germany, helped design the lattice of lasers that holds the atoms in place.

For the international team of physicists, that balancing act is worth it. The resulting material, the team's calculations show, is the much-sought-after topological superfluid needed to create next-generation quantum computers. But because Liu's team used atoms to produce these quantum effects rather than lighter particles like electrons or photons, any quantum computer made from the material would be impractically slow. Instead, Liu said, it will likely be most useful for studying the finer points of how that technology might work.

"It's like you're watching an NBA player in slow motion. You're going to see all of the motion, all of the subtle physics, in a very clear way," he explained.

That more fine-tuned understanding could help researchers design quantum computers that could handle fast calculations. And the material's stability compared to other quantum materials could lend itself to other uses, like hyper-precise timekeeping and information storage.

As exciting as the discovery is, it represents only one line of Liu's work; as a theorist, he works with physicists across the globe to push the boundaries of different kinds of quantum materials. Besides the thrill of discovery and the mathematical beauty of the physics, Liu says it's those collaborations that keep him excited about the field.

"You could say the community moves as a whole," he said. "If I just walked by myself, I probably wouldn't move very far."

Patrick Monahan
