
Watch: How Abu Dhabi is ushering in a new era of computing with state-of-the-art quantum lab – Gulf News

Abu Dhabi: At the heart of Abu Dhabi's science research hub in Masdar, a new era of computing is taking shape. With massive investments towards becoming a leader in the field, Abu Dhabi could well revolutionise quantum computing when a newly developed foundry starts churning out quantum chips this summer.

With the world of computing still undecided on which platform works best to enable, and then scale up, quantum computing, chips manufactured at the laboratory will allow important experiments into the possibilities of various materials and configurations.

Quantum foundry

The laboratory is part of the Quantum Research Centre, one of a number of research interests at the Technology Innovation Institute (TII), which focuses on applied research and is part of the over-arching Advanced Technology Research Council in Abu Dhabi.

TII Quantum Foundry will be the first quantum device fabrication facility in the UAE. "At the moment, it is still under construction. We are installing the last of the tools needed to manufacture superconducting quantum chips. We are hoping that it will be ready soon, and hopefully by then, we can start manufacturing the first quantum chips in the UAE," Alvaro Orgaz, lead for quantum computing control at TII's Quantum Research Centre, told Gulf News.

"The design of quantum chips is an area of active research at the moment. We are also interested in this. So, we will manufacture our chips and install them into our quantum refrigerators, then test them and improve on each iteration of the chip," he explained.

What is quantum computing?

Classical computers process information in bits, tiny on and off switches that are encoded in zeroes and ones. In contrast, quantum computing uses qubits as the fundamental unit of information.

Unlike classical bits, qubits can take advantage of a quantum mechanical effect called superposition, where they exist as 1 and 0 at the same time. One qubit cannot always be described independently of the state of the others either, in a phenomenon called entanglement. The capacity of a quantum computer increases exponentially with the number of qubits. "The efficient usage of quantum entanglement drastically enhances the capacity of a quantum computer to be able to deal with challenging problems," explained Professor Dr José Ignacio Latorre, chief researcher at the Quantum Research Centre.
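The exponential growth in capacity can be illustrated with a few lines of NumPy. This is a toy state-vector simulation for illustration only, not anything specific to TII's hardware: the state of n qubits is a vector of 2^n complex amplitudes, and putting every qubit into superposition spreads weight over all of them at once.

```python
import numpy as np

def n_qubit_state(n):
    """State vector of n qubits initialised to |00...0>.

    A classical n-bit register holds one of 2**n values at a time;
    an n-qubit register is described by 2**n complex amplitudes,
    which is why capacity grows exponentially with qubit count.
    """
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    return state

def uniform_superposition(n):
    """Apply a Hadamard gate to every qubit: each of the 2**n basis
    states (every string of zeroes and ones) gets equal amplitude."""
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    gate = np.array([[1.0]])
    for _ in range(n):
        gate = np.kron(gate, h)
    return gate @ n_qubit_state(n)

state = uniform_superposition(10)
print(len(state))  # 1024 amplitudes from just 10 qubits
```

Doubling from 10 to 20 qubits squares the amplitude count to over a million, which is the sense in which capacity grows exponentially with the number of qubits.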

Why quantum computing?

When quantum computers were first proposed in the 1980s and 1990s, the aim was to help computing for certain complex systems such as molecules that cannot be accurately depicted with classical algorithms.

"Quantum effects translate well to complex computations in some fields like pharmaceuticals and material sciences, as well as optimisation processes that are important in aviation, oil and gas, the energy sector and the financial sector. In a classical computer, you can have one configuration of zeroes and ones or another. But in a quantum system, you can have many configurations of zeroes and ones processed simultaneously in a superposition state. This is the fundamental reason why quantum computers can solve some complex computational tasks more efficiently than classical computers," said Dr Leandro Aolita, executive director of quantum algorithms at the Quantum Research Centre.

Complementing classical computing

On a basic level, this means that quantum computers will not replace classical computers; they will complement them.

"There are some computational problems in which quantum computers will offer no speed-up. There are only some problems where they will be superior. So, you would not use a quantum computer, which is designed for high-performance computing, to write an email," the researcher explained. This is why, in addition to research, the TII is also working with industry partners to see which computational problems may translate well to quantum computing, and the speed-up this may provide once the computers are mature enough to process them.

Quantum effect fragility

At this stage, the simplest quantum computer is already operational at the QRC laboratory in Masdar City. This includes two superconducting qubit chips mounted in refrigerators at the laboratory, even though quantum systems can be created on a number of different platforms.

"Here, the superconducting qubit chip is in a cooler that takes the system down to around 10 millikelvin, which is even cooler than the temperature of outer space. You have to isolate the system from the thermal environment, but you also need to be able to insert cables to control and read the qubits. This is the most difficult challenge from an engineering and a technological perspective, especially when you scale up to a million qubits, because quantum effects are so fragile. No one knows the exact geometric configurations to minimise the thermal fluctuations and the noise, [and this is one of the things that testing will look into once we manufacture different iterations of quantum chip]," Dr Aolita explained.

Qubit quality

The quality of the qubit is also very important, which boils down to the manufacture of a chip with superconducting current that displays quantum effects. The chips at TII are barely 2x10 millimetres in size, and at their centre is a tiny circuit known as the Josephson junction that enables the control of quantum elements.

"It is also not just a matter of how many qubits you have, as the quality of the qubits matters. So, you need to have particles that preserve their quantum superposition, you need to be able to control them, have them interact the way you want, and read their state, but you also have to isolate them from the noise of the environment," he said.

Optimistic timeline

Despite these massive challenges to perfect a minute chip, Dr Aolita was also quite hopeful about the work being accomplished at TII, including discussions with industry about the possible applications of quantum computing.

"I think we could see some useful quantum advantages in terms of classical computing power in three to five years," he said. "[Right now], we have ideas, theories, preliminary experiments and even some prototypes. Quantum computers even exist, but they are small and still not able to outperform classical supercomputers. But this was the case with classical computing too. In the 1940s and 1950s, a computer was the size of an entire gym or vault. Then the transistor arrived, which revolutionised the field and miniaturised computers to much smaller machines that were also faster. Something similar could happen here, and it really is a matter of finding which kind of qubit to use, and this could ease the process a lot. My prediction for a timeline is optimistic, but not exaggerated," the researcher added.

Science research

Apart from the technological breakthroughs, the QRC's efforts are likely to also improve Abu Dhabi's status as a hub for science and research.

"The UAE has a long tradition of adopting technologies and incorporating technologies bought from abroad. This is now [different in] that the government is putting a serious stake in creating and producing this technology, and this creates a multiplicative effect in that young people get more enthusiastic about scientific careers. This creates more demand for universities to start new programmes in physics, engineering, computer science and mathematics. This [will essentially have] a long-term, multiplicative effect on development, independent of the concrete goal or technical result of the project on the scientific environment in the country," Dr Aolita added.

The QRC team currently includes 45 people, but this will grow to 60 by the end of 2022, and perhaps to 80 people in 2023. "We also want to prioritise hiring the top talent from across the world," Dr Aolita added.


How Data Has Changed the World of HR – ADP

In this "On the Job" segment from Cheddar News, Amin Venjara, General Manager of Data Solutions at ADP, describes the importance of data and how human resources leaders are relying on real-time access to data now more than ever. Venjara offers real-world examples of data's impact on the top challenges faced by organizations today.

Businesses big and small have been utilizing the latest tech and innovation to make the new remote and hybrid working environments possible.

Speaking with Cheddar News, above, Amin Venjara (AV), says relying on quality and accessible data to take action is how today's HR teams are impacting the modern workforce.

Q: How does data influence the role of human resources (HR)?

AV: The last few years have thrust HR teams into the spotlight. Think about all the changes we've seen managing the onset of the pandemic, the return to the workplace, the great resignation and all the challenges that's brought and even the increased focus on diversity, equity and inclusion. HR has been at the focal point of responding to these challenges. And in response, we've seen an uptick in the use of workforce analytics and benchmarking. HR teams need the data to be able to help make decisions in real time as things are changing. And they're using it with the executives and managers they support to make data-driven decisions.

Q: Clearly data-driven solutions are critical in today's workforce as you've been discussing, where has data made the most significant impact?

AV: When we talk to employers, we continuously hear about four key areas related to their workforce: attracting top talent, retaining and engaging talent, optimizing labor costs, and fostering a diverse, equitable and inclusive workforce.

To give an example of the kind of impact that data can have: we have a product that helps organizations calculate and take action on pay equity. They can see gaps by gender and race/ethnicity, based on internal and market data. Over 70% of active clients using this tool are seeing a decrease in pay equity gaps. If you look at the size of this, they're spending over a billion dollars to close those gaps. That's not just analytics and data; that's taking action. So, think about the impact that has on the message about equal pay for equal work, and also the impact it has on productivity and the lives of those individual workers and their families.

Q: In today's tight talent market, employers increasingly need help recruiting and even retaining workers. How can data and machine learning alleviate some of those very pressing challenges?

AV: Here's an interesting thing about what's happening in the current labor market. U.S. employment numbers are back to pre-pandemic levels with 150 million workers on the payroll. However, we're at the lowest ratio of unemployed workers to job openings we've seen in over 15 years. To put it simply, it's a candidate's market out there, and jobs are chasing workers.

Two things to keep in mind: first, employers have to employ data-driven strategies to be competitive. Labor markets are changing: remote work, hybrid work, expectations on pay and even the physical locations of workers, since people have moved a lot. Employers need access to real-time, accurate data on the supply and demand of labor and on compensation to hire the right workers and keep the ones they have.

The second thing is really about the adoption of machine learning in recruiting workflows. We're seeing machine learning being adopted in chatbots for personalizing the experience and even helping with scheduling, but also AI-based algorithms to help score candidate profiles against jobs. Overall, the best organizations are combining technology and data with their recruiting and hiring managers to decrease the overall time to fill open jobs.

Q: Becoming data confident might be a concern or even perhaps intimidating for some, but what's an example of how an organization can use data well?

AV: A lot of organizations are trying to make this happen. We recently worked with a quick service restaurant with about 2,000 locations across the U.S. In light of the supply chain challenges and demographic shifts of the last couple of years, they wanted to know how to combine and optimize the supply at each location based on expected demand.

Their research enabled them to correlate demographics, things like age, income and even family status, to items on the menu like salads, sandwiches and kids' meals. But what they needed was a stronger signal on what's happening in the local context of each location. They had used internal data for so long, but things had shifted. By using our monthly anonymized and aggregated data from nearly 20% of the workforce, they were able to optimize their demand forecasting models and increase their supply chain efficiency. There are two lessons to think about. They had a key strategic problem, and they worked backwards from that. That's a key piece of becoming data confident: focusing on something that matters and making data-driven decisions about it. The second is about going beyond the four walls of your organization. There are so many different and new sources of data available due to the digitization of our economy. To unlock the insight and the strength of signal you need, you really have to look for the best sources to get there.

Q: How do you see the role of data evolving as we look toward the future of work?

AV: Data has really become the language of business right now. I see a couple of trends as we look out. The first is the acceleration of data in the flow of work. When you look at a lot of organizations today, when people need data, they have to go to a reporting group or a business intelligence group to request the data. Then it takes a couple of cycles to get it right and then make a decision. The cycle time can be high.

What I expect to see now is data more and more in the flow of work, where business decision makers are working immediately because they have the right data at their fingertips. You see that across domains. The second is the separation between the haves and the have-nots. With the increasing speed of change, data haves are going to be able to outstrip data have-nots. Those who have invested in building the right organizational, technical, and cultural muscle will see the spoils of this in the years to come.

Learn more

In the post-pandemic world of work, the organizations that prioritize people first will rise to the top. Find out how to make HR more personalized to adapt to today's changing talent landscape. Get our guide: Work is personal



ClearBuds: First wireless earbuds that clear up calls using deep learning – University of Washington


July 11, 2022

ClearBuds use a novel microphone system and are one of the first machine-learning systems to operate in real time and run on a smartphone. (Image: Raymond Smith/University of Washington)

As meetings shifted online during the COVID-19 lockdown, many people found that chattering roommates, garbage trucks and other loud sounds disrupted important conversations.

This experience inspired three University of Washington researchers, who were roommates during the pandemic, to develop better earbuds. To enhance the speaker's voice and reduce background noise, ClearBuds use a novel microphone system and one of the first machine-learning systems to operate in real time and run on a smartphone.

The researchers presented this project June 30 at the ACM International Conference on Mobile Systems, Applications, and Services.

"ClearBuds differentiate themselves from other wireless earbuds in two key ways," said co-lead author Maruchi Kim, a doctoral student in the Paul G. Allen School of Computer Science & Engineering. "First, ClearBuds use a dual microphone array. Microphones in each earbud create two synchronized audio streams that provide information and allow us to spatially separate sounds coming from different directions with higher resolution. Second, the lightweight neural network further enhances the speaker's voice."

While most commercial earbuds also have microphones on each earbud, only one earbud is actively sending audio to a phone at a time. With ClearBuds, each earbud sends a stream of audio to the phone. The researchers designed Bluetooth networking protocols to allow these streams to be synchronized within 70 microseconds of each other.

The team's neural network algorithm runs on the phone to process the audio streams. First it suppresses any non-voice sounds. Then it isolates and enhances any sound that arrives at both earbuds at the same time: the speaker's voice.

"Because the speaker's voice is close by and approximately equidistant from the two earbuds, the neural network can be trained to focus on just their speech and eliminate background sounds, including other voices," said co-lead author Ishan Chatterjee, a doctoral student in the Allen School. "This method is quite similar to how your own ears work. They use the time difference between sounds coming to your left and right ears to determine which direction a sound came from."
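The ear analogy boils down to estimating a time difference of arrival between the two channels. A minimal NumPy sketch (using a hypothetical toy signal, not the ClearBuds pipeline) cross-correlates the two channels and reads off the lag of the peak:

```python
import numpy as np

def estimate_delay(a, b):
    """Return how many samples signal b lags signal a (positive means
    b arrived later), via the peak of their cross-correlation."""
    corr = np.correlate(b, a, mode="full")
    return int(np.argmax(corr)) - (len(a) - 1)

# Toy example: the same voice reaches the right channel 5 samples late.
rng = np.random.default_rng(0)
voice = rng.standard_normal(1000)
left = voice
right = np.concatenate([np.zeros(5), voice[:-5]])
print(estimate_delay(left, right))  # 5
```

A sound that is equidistant from both earbuds, like the wearer's own voice, yields a delay near zero, which is exactly the cue a network can be trained to latch onto.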

Shown here: the ClearBuds hardware (round disk) in front of the 3D-printed earbud enclosures. (Image: Raymond Smith/University of Washington)

When the researchers compared ClearBuds with Apple AirPods Pro, ClearBuds performed better, achieving a higher signal-to-distortion ratio across all tests.

"It's extraordinary when you consider the fact that our neural network has to run in less than 20 milliseconds on an iPhone that has a fraction of the computing power compared to a large commercial graphics card, which is typically used to run neural networks," said co-lead author Vivek Jayaram, a doctoral student in the Allen School. "That's part of the challenge we had to address in this paper: How do we take a traditional neural network and reduce its size while preserving the quality of the output?"
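Shrinking a network while preserving output quality is usually attacked with techniques such as pruning, distillation, or quantization. As one hedged illustration (the article does not say which methods the ClearBuds authors used), 8-bit weight quantization cuts storage four-fold at a small reconstruction cost:

```python
import numpy as np

def quantize_int8(w):
    """Map float32 weights to int8 plus one scale factor, a common way
    to shrink a model roughly 4x. Illustrative only; not necessarily
    the technique the ClearBuds team used."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
print(q.nbytes / w.nbytes)  # 0.25: int8 storage is a quarter of float32
```

The worst-case rounding error is half the scale factor, so the dequantized weights stay close to the originals while the storage and memory bandwidth drop substantially.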

The team also tested ClearBuds "in the wild" by recording eight people reading from Project Gutenberg in noisy environments, such as a coffee shop or a busy street. The researchers then had 37 people rate 10- to 60-second clips of these recordings. Participants rated clips that were processed through ClearBuds' neural network as having the best noise suppression and the best overall listening experience.

One limitation of ClearBuds is that people have to wear both earbuds to get the noise suppression experience, the researchers said.

But the real-time communication system developed here can be useful for a variety of other applications, the team said, including smart-home speakers, tracking robot locations or search and rescue missions.

The team is currently working on making the neural network algorithms even more efficient so that they can run on the earbuds.

Additional co-authors are Ira Kemelmacher-Shlizerman, an associate professor in the Allen School; Shwetak Patel, a professor in both the Allen School and the electrical and computer engineering department; and Shyam Gollakota and Steven Seitz, both professors in the Allen School. This research was funded by the National Science Foundation and the University of Washington's Reality Lab.

For more information, contact the team at clearbuds@cs.washington.edu.


C3 AI Named a Leader in AI and Machine Learning Platforms – Business Wire

REDWOOD CITY, Calif.--(BUSINESS WIRE)--C3 AI (NYSE: AI), the Enterprise AI application software company, today announced that Forrester Research has named it a Leader in AI and Machine Learning Platforms in its July 2022 report, The Forrester Wave: AI/ML Platforms, Q3 2022.

"Ahead of its time, C3 AI's strategy is to make AI application-centric by building a growing library of industry solutions, forging deep industry partnerships, running in every cloud, and facilitating extreme reuse through common data models," the report states.

"We are pleased to be recognized as a leader in AI and ML platforms," said Thomas Siebel, C3 AI CEO. "I'm delighted to see C3 AI's significant investments in enterprise AI software be acknowledged. I believe that Forrester Research has made an important contribution, having published the first professional comprehensive analysis of enterprise AI and machine learning platforms," Siebel continued, "changing the dialogue from a focus on disjointed tools to the importance of cohesive enterprise AI platforms. This is certain to accelerate the market adoption of enterprise AI and simplify often protracted decision processes."

Of the 15 vendors in the report, C3 AI received the top ranking in the Strategy category.

Download The Forrester Wave: AI and Machine Learning Platforms, Q3 2022 report here.

About C3 AI

C3 AI is the Enterprise AI application software company. C3 AI delivers a family of fully integrated products, including the C3 AI Suite, an end-to-end platform for developing, deploying, and operating enterprise AI applications, and C3 AI Applications, a portfolio of industry-specific SaaS enterprise AI applications that enable the digital transformation of organizations globally.


Automated identification of hip arthroplasty implants using artificial intelligence | Scientific Reports – Nature.com

Study design and radiograph acquisition

After institutional review board approval, we retrospectively collected all radiographs taken between June 1, 2011 and Dec 1, 2020 at one university hospital. The images were collected using Neusoft PACS/RIS Version 5.5 on a personal computer running Windows 10. We confirm that all methods were performed in accordance with the relevant guidelines and regulations. Images were collected from surgeries performed by 3 fellowship-trained arthroplasty surgeons to ensure a variety of implant manufacturers and implant designs. At the time of collection, images had all identifying information removed and were thus de-identified. Implant type was identified through the primary surgery operative note and crosschecked with implant sheets. Implant designs were only included in our analysis if more than 30 images per model were identified [14].

From the medical records of 313 patients, a total of 357 images were included in this analysis.

Although Zimmer and Biomet have merged (Zimmer Biomet), they were treated as two distinct manufacturers. The following designs from four industry-leading manufacturers were included: Biomet Echo Bi-Metric (Zimmer Biomet), Biomet Universal RingLoc (Zimmer Biomet), Depuy Corail (Depuy Synthes), Depuy Pinnacle (Depuy Synthes), LINK Lubinus SP II, LINK Vario cup, and Zimmer Versys FMT and Trilogy (Zimmer Biomet). Implant designs that did not meet the 30-image threshold were not included. Figure 1 shows an example of cup and stem anterior-posterior (AP) radiographs of each included implant design. The four types of implants are denoted as type A, type B, type C, and type D respectively in this paper.

Figure 1. An example of cup and stem radiographs of each included implant design.

We used convolutional neural network (CNN) algorithms for the classification of hip implants. Our training data consist of images of the anteroposterior (AP) view of the hips. For each image, we manually cut the image into two parts: the stem and the cup. We trained four CNN models: the first using stem images (stem network), the second using cup images (cup network), and the third using the original uncut images (combined network). The fourth is an integration of the stem network and the cup network (joint network).

Since the models involve millions of parameters, while our data set contained fewer than one thousand images, it was infeasible to train a CNN model from scratch using our data. Therefore, we adopted the transfer learning framework to train our networks [17]. Transfer learning is a paradigm in the machine learning literature that is widely applied in scenarios where the training data is scarce compared to the scale of the model [18]. Under this framework, the model is first initialized to a model pretrained on other data sets that contain enough data for a different but related task. Then, we tune the model using our data set by performing gradient descent (backward propagation) only on the last two layers of the networks. As the number of parameters in the last two layers is comparable with the size of our data set (for the target task), and the parameters in the earlier layers have been tuned in the pre-trained model, the resulting network can achieve satisfactory performance on the target task.

In our case, the CNN models we used are based on the established ResNet50 network pre-trained on the ImageNet data set [19]. The target task and our training data sets correspond to the images of the AP views of the hips (stem, cup, and combined).
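The freeze-then-fine-tune recipe can be sketched without a deep learning framework. Below, a random frozen feature map stands in for ResNet50's pretrained layers, and gradient descent updates only a linear softmax head for the four implant classes; this is a toy illustration with synthetic data, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen "pretrained" layers: these weights are never updated,
# playing the role of ResNet50's early stages.
W_frozen = rng.standard_normal((64, 32)) * 0.1

def features(x):
    return np.tanh(x @ W_frozen)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Trainable head: the analogue of the last layers that are fine-tuned.
W_head = np.zeros((32, 4))  # 4 implant classes (types A-D)

# Synthetic stand-in for the radiograph data set.
X = rng.standard_normal((200, 64))
y = rng.integers(0, 4, size=200)
Y = np.eye(4)[y]

lr, losses = 0.1, []
for _ in range(200):
    F = features(X)                        # frozen forward pass
    P = softmax(F @ W_head)
    losses.append(-np.mean(np.sum(Y * np.log(P + 1e-12), axis=1)))
    W_head -= lr * F.T @ (P - Y) / len(X)  # gradient step on the head only
```

Only the head's gradient is ever computed, which is why fine-tuning a few layers is feasible on a data set far smaller than the full parameter count would normally require.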

Figure 2 gives an overview of the framework of our deep learning-based method.

Overview of the framework of our deep learning-based method.

Our dataset contained 714 images from 4 different kinds of implants.

We followed standard procedures to pre-process our training data so that it could work with a network trained on ImageNet. We rescaled each image to a size of 224 × 224 and normalized it according to ImageNet standards. We also performed data augmentation, i.e., random rotation, horizontal flips, etc., to increase the amount of training data and make our algorithm robust to the orientation of the images.

We first divided the set of patients into three groups of approximately 60% (group 1), 30% (group 2), and 10% (group 3). This split was performed on a per-design basis to ensure the ratio of each implant remained constant. Next, we used the cup and stem images of patients in group 1 for training, those of patients in group 2 for validation, and those of patients in group 3 for testing. The validation set was used to compute cross-validation loss for hyper-parameter tuning and early stopping determination.

We adopted the adaptive gradient method ADAM [20] to train our models. Based on the cross-validation loss, we chose the hyper-parameters for ADAM as learning rate α = 0.001, β₁ = 0.9, β₂ = 0.99, ε = 10⁻⁸, and weight_decay = 0. The maximum number of epochs was 1000 and the batch size was 16. The early stopping threshold was set to 8. During the training process of each network, the early stopping threshold was hit after around 50 epochs. As mentioned above, we trained four networks in total.

The first network was trained with the stem images, the second with the cup images. The third network was trained with the original uncut images, which is one way we propose to combine the power of stem and cup images. We further integrated the first and second networks as an alternative way of jointly utilizing stem and cup images. The integration was done via the following logistic regression-based method. We collected the outputs of the stem network and the cup network (both of the form of a 4-dimensional vector, with each element corresponding to the classification weight the network gives to that category of implants), fed them as input to a two-layer feed-forward neural network, and trained that network with data from the validation set. The integration is similar to a weighted-voting procedure between the outputs of the stem network and the cup network, with the weighting votes computed from the validation data set. Note that this construction relied on our dataset division procedure, where the training, validation, and testing sets each contained the stem and cup images of the same set of patients. We refer to the resulting network, constructed from the outputs of the stem network and cup network, as the joint network.
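The integration step above can be sketched as follows: concatenate the two 4-dimensional confidence vectors and pass them through a small two-layer feed-forward net. The weights below are hand-picked placeholders that reduce to equal-weight voting; in the paper they are learned from the validation set.

```python
import numpy as np

def joint_predict(stem_conf, cup_conf, W1, b1, W2, b2):
    """Combine stem- and cup-network confidences with a two-layer net."""
    x = np.concatenate([stem_conf, cup_conf])  # 8-dim joint input
    hidden = np.maximum(0.0, W1 @ x + b1)      # ReLU hidden layer
    logits = W2 @ hidden + b2                  # one logit per implant type
    return int(np.argmax(logits))

# Placeholder weights that simply add the two votes per class.
W1 = np.hstack([np.eye(4), np.eye(4)])  # (4, 8): sums the paired entries
b1 = np.zeros(4)
W2 = np.eye(4)                          # (4, 4): pass-through
b2 = np.zeros(4)

stem_conf = np.array([0.1, 0.6, 0.2, 0.1])  # stem network leans to type B
cup_conf = np.array([0.2, 0.5, 0.2, 0.1])   # cup network agrees
print(joint_predict(stem_conf, cup_conf, W1, b1, W2, b2))  # 1 (type B)
```

With learned weights, the hidden layer can weight one network's vote more heavily on the classes where it is historically more reliable, which is what training on the validation set accomplishes.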

We tested our models (stem, cup, Joint) using the testing set. The prediction result on each testing image was a 4-dimensional vector, with each coordinate representing the classification confidence of the corresponding category of implants.

Since we were studying a multi-class classification problem, we directly present the confusion matrices of our methods on the testing data and compute the operating characteristics generalized for multi-class classification.
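For a multi-class problem, the confusion matrix and the per-class operating characteristics can be computed as follows; this is a generic sketch with a made-up toy example, not the paper's actual results.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=4):
    """cm[i, j] counts test images of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_stats(cm):
    """Per-class precision and recall, generalising the binary case."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)  # column sums = predicted counts
    recall = tp / cm.sum(axis=1)     # row sums = true counts
    return precision, recall

# Toy predictions over the four implant types (0=A, 1=B, 2=C, 3=D).
y_true = [0, 0, 1, 1, 2, 3]
y_pred = [0, 1, 1, 1, 2, 3]
cm = confusion_matrix(y_true, y_pred)
precision, recall = per_class_stats(cm)
print(recall[0])  # 0.5: one of the two true type-A images was missed
```

Reporting the full matrix rather than a single accuracy figure shows which implant designs get confused with which, which matters when some designs look alike on radiographs.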

The institutional review board approved the study with a waiver of informed consent because all images were anonymized before the time of the study.


Harnessing the power of artificial intelligence – UofSC News & Events – SC.edu

On an early visit to the University of South Carolina, Amit Sheth was surprised when 10 deans showed up for a meeting with him about artificial intelligence.

Sheth, the incoming director of the university's Artificial Intelligence Institute at the time, thought he would need to sell the deans on the idea. Instead, it was they who pitched the importance of artificial intelligence to him.

"All of them were telling me why they are interested in AI, rather than me telling them why they should be interested in AI," Sheth said in a 2020 interview with the university's Breakthrough research magazine. "The awareness of AI was already there, and the desire to incorporate AI into the activities that their faculty and students do was already on the campus."

Since the university announced the institute in 2019, that interest has only grown. There are now dozens of researchers throughout campus exploring how artificial intelligence and machine learning can be used to advance fields from health care and education to manufacturing and transportation. On Oct. 6, faculty will gather at the Darla Moore School of Business for a panel discussion on artificial intelligence led by Julius Fridriksson, vice president for research.

South Carolina's efforts stand out in several ways: the collaborative nature of research, which involves researchers from many different colleges and schools; a commitment to harnessing the power of AI in an ethical way; and the university's commitment to projects that will have a direct, real-world impact.

This week, as the Southeastern Conference marks AI in the SEC Day, we look at some of the remarkable efforts of South Carolina researchers in the area of artificial intelligence.


In iOS 16 A New iPhone Tool Makes Photobombing A Thing of the Past – CNET

This story is part of WWDC 2022, CNET's complete coverage from and about Apple's annual developers conference.

Apple's iOS 16 will include a lot of new iPhone features like editable Messages and a customizable lock screen. But there was one feature that truly grabbed my attention during WWDC 2022, despite taking up less than 15 seconds of the event.

The feature hasn't been given a name, but here's how it works: You tap and hold on a photo to separate a picture's subject, like a person, from the background. And if you keep holding, you can then "lift" the cutout from the photo and drag it into another app to post, share or make a collage, for example.

Technically, the tap-and-lift photo feature is part of Visual Lookup, which was first launched with iOS 15 and can recognize objects in your photos such as plants, food, landmarks and even pets. In iOS 16, Visual Lookup lets you lift that object out of a photo or PDF by doing nothing more than tapping and holding.

During the WWDC keynote, Apple showed someone tapping and holding on the dog in a photo to lift it from the background and share it in a message.

Robby Walker, Apple senior director of Siri Language and Technologies, demonstrated the new tap-and-lift tool on a photo of a French bulldog. The dog was "cut out" of the photo and then dragged and dropped into the text field of a message.

"It feels like magic," Walker said.

Sometimes Apple overuses the word "magic," but this tool does seem impressive. Walker was quick to point out that the effect is the result of an advanced machine-learning model, accelerated by Core ML and Apple's Neural Engine to perform 40 billion operations per second.

Knowing the amount of processing and machine learning required to cut a dog out of a photo thrills me to no end. New phone features are often expected to be revolutionary or to solve a serious problem. I guess you could say the tap-and-hold tool solves the problem of removing a photo's background, which to at least some people is a serious matter.

I couldn't help noticing the similarity to another photo feature in iOS 16. On the lock screen, the photo editor separates the foreground subject from the background of the photo used for your wallpaper. This lets lock screen elements like the time and date be layered behind the subject of your wallpaper but in front of the photo's background, making it look like the cover of a magazine.

I tried the new Visual Lookup feature in the developer beta for iOS 16, and I'm still impressed by how quickly and reliably it works. If you have a spare iPhone to try it on, the developer beta for iOS 16 is already available, and a public beta version will be out in July.

For more, check out everything that Apple announced at WWDC 2022, including the new M2 MacBook Air.

See the original post here:
In iOS 16 A New iPhone Tool Makes Photobombing A Thing of the Past - CNET

Read More..

UF partners with CIA on improving cybersecurity – News – University of Florida – University of Florida

From the shutdown of an oil pipeline to disrupted access to government, business and healthcare system databases, high-profile cyberattacks in 2021 prompted heightened interest in improving the nation's cybersecurity.

Answers on how to do that may come from a collaboration between the University of Florida and the U.S. Central Intelligence Agency, the first of its kind in the nation.

The university and the CIA have entered an agreement to study how artificial intelligence and machine learning (AIML) applications can be used to detect and deter malicious agents that infiltrate computer networks. The work will be carried out by researchers associated with UF's Florida Institute for National Security.

"If you're operating retroactively in cybersecurity, oftentimes you are too late," said Damon Woodard, principal researcher and newly appointed director of the Florida Institute for National Security. "This collaboration will accelerate our ability to understand and expand the research on applications of AIML to cybersecurity."

One area of research will be reinforcement learning, which attempts to mimic how humans learn through trial and error. Woodard said little work has been done on applying this method of machine learning to cybersecurity problems. Researchers will explore the technology on simple problems and then see whether solutions can be scaled up.
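The trial-and-error loop that reinforcement learning relies on can be illustrated with a minimal sketch. The states, actions and rewards below are invented for illustration and have no connection to the actual UF/CIA research:

```python
import random

# A toy agent learns by trial and error which action ("allow" or "block")
# to take in each of three hypothetical network states.
STATES = ["normal", "suspicious", "attack"]
ACTIONS = ["allow", "block"]
# Hypothetical rewards: blocking an attack is good, blocking normal traffic is bad.
REWARD = {
    ("normal", "allow"): 1.0, ("normal", "block"): -1.0,
    ("suspicious", "allow"): -0.5, ("suspicious", "block"): 0.5,
    ("attack", "allow"): -2.0, ("attack", "block"): 2.0,
}

def train(episodes=5000, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # value estimates
    for _ in range(episodes):
        s = rng.choice(STATES)                 # environment presents a state
        if rng.random() < epsilon:             # explore a random action ...
            a = rng.choice(ACTIONS)
        else:                                  # ... or exploit what was learned
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = REWARD[(s, a)]                     # trial: act and observe the reward
        q[(s, a)] += alpha * (r - q[(s, a)])   # error: nudge the estimate toward it
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # the learned action for each state
```

Each trial acts, observes a reward, and nudges the value estimate toward it; over many episodes the learned policy settles on blocking attacks and allowing normal traffic, without ever being told those rules directly.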

"In terms of a cyberattack, you are trying to figure out what the person attacking you is trying to do so you can anticipate and make adjustments on your side to stop them," Woodard said.

The Identity Theft Resource Center reported in January there were 1,603 cyberattack-related data breaches in 2021, an increase of about 500 over the previous year. Ransomware attacks are also on the rise, doubling in each of the past two years, the nationally recognized nonprofit organization said.

The hope, Woodard said, is the work will revolutionize the way the world thinks about cybersecurity and provide insights and technologies that can better protect data and strengthen security across both the government and private sectors. The team also includes two UF graduate students.

"I'm excited to see the ramifications of this project in the security domain as well as in other domains, such as biomedical and business," said Olivia Dizon-Paradis, a doctoral student in electrical and computer engineering. "I'm hoping my involvement in this project will help jumpstart my research career in lifelong machine learning."

Stephen Wormald, also a doctoral student in Electrical and Computer Engineering, said he was excited about being able to work with leading researchers to develop state-of-the-art technology.

"My involvement will develop personal skills in research, writing and mathematics that I can use long-term in industry," Wormald said. "I hope to apply my skills to develop technology and study basic research problems that improve individuals' quality of life."

The Florida Institute for National Security was launched in May with the goal of taking a leading role in multidisciplinary research on national security through long-term partnerships with industry, academe and government that lead to commercial products and spin-off companies.

The project is the latest initiative in UF's sweeping focus on artificial intelligence, a $1 billion effort to advance AI across the curriculum and in research and industry. The university's initiative, and the work of the institute, is aided by access to the HiPerGator supercomputer.

Woodard said working with the CIA offers the opportunity to share project expertise and provides exposure to many diverse challenges.

"Working with the CIA is a major benefit because they present interesting constraints in cybersecurity," Woodard said. "You're dealing with worst-case scenarios to prepare for everything from low-quality data to low-resolution images. This level of research allows us to reach our full capacity for understanding potential shortcomings."

Excerpt from:
UF partners with CIA on improving cybersecurity - News - University of Florida - University of Florida

Read More..

New study to probe machine learning role in treating depression – The Indian Express

In one of the first studies of its kind, a machine learning approach will be used to determine optimal treatments for patients suffering from depression, especially in the Indian context. If successful, this technological tool could then be used in other low- and middle-income countries too.

The US National Institute of Mental Health-funded study will be a collaborative effort between Sangath, a 26-year-old mental health research organisation based in Goa with regional hubs in Pune, Bhopal and New Delhi, and AIIMS Bhopal.

Dr Vikram Patel of Harvard Medical School, co-founder of Sangath, said that this precision medicine approach for treating depression will also examine whether polygenic risk scores can predict response to either antidepressant medication or psychological counselling. "It is a four-year project and will be implemented closely with AIIMS Bhopal. The study will have a sample size of 1,500 patients," he said. He and Dr Steve Hollon from Vanderbilt University will lead the study as project investigators.

The machine learning approach will take into consideration various data points, like specific genetic factors, family information, and medical and clinical history, to predict treatment outcomes in patients with depression. The research study is based on the premise that using a machine learning approach to select the optimal treatment for each individual patient will prove more effective than leaving the choice to chance.
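As a loose illustration of the idea, not the study's actual method, here is a sketch that recommends a treatment by majority vote among the most similar past patients. All features, values and labels below are invented; the study's real data points (genetics, family and clinical history) would be far richer:

```python
# Hypothetical past patients: (severity 0-1, prior episodes, family history 0/1)
# paired with the treatment that worked better for them. Purely illustrative.
PAST_PATIENTS = [
    ((0.9, 3, 1), "medication"),
    ((0.8, 2, 1), "medication"),
    ((0.7, 2, 0), "medication"),
    ((0.3, 0, 0), "counselling"),
    ((0.4, 1, 0), "counselling"),
    ((0.2, 0, 1), "counselling"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def recommend(features, k=3):
    """Vote among the k past patients most similar to the new one."""
    nearest = sorted(PAST_PATIENTS, key=lambda p: distance(p[0], features))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(recommend((0.85, 2, 1)))  # severe case -> medication
print(recommend((0.25, 0, 0)))  # mild case -> counselling
```

A real system would learn weights for each data point from outcome data rather than use raw distances, but the principle is the same: match a new patient to the treatment that worked for similar patients, instead of trying alternatives by protocol.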

Depression is a major contributor to the global disease burden. Recently, WHO chief scientist Dr Soumya Swaminathan tweeted: "One billion people live with a mental health disorder. Suicide accounts for one in 100 deaths, especially among adolescents. Still, governments spend two per cent of health budgets on mental health care. At WHO, the pandemic has sparked a push for global mental health transformation."

Study researchers said, "In the case of moderate to severe depression, a patient is offered either medicines (antidepressant medication) or counselling or both. However, which is the right treatment for each patient is a difficult decision to make, and the protocol involves trying out various alternatives." The research study aims to improve treatment outcomes for patients with depression by personalising the treatment options.

The study is being conducted in collaboration with the National Health Mission, Madhya Pradesh, the Madhya Pradesh health department and AIIMS Bhopal for improving depression care in low-resource, primary healthcare settings.

Follow this link:
New study to probe machine learning role in treating depression - The Indian Express

Read More..

How companies can benefit from upskilling their employees with AI and Machine learning – The Financial Express

By Glenn Campbell

The Future of Work has arrived much sooner than many of us anticipated. Today, we are living in a world that is tech-driven, and ever-developing new technologies like AI, automation and big data have brought a paradigm shift in the job market by opening up powerful opportunities.

Businesses across the world are responding to a high-tech future of work by upgrading their existing skills and building new capabilities to stay relevant with the times. They are increasingly adopting new technologies to grow and scale deep thinking and analysis.

Not only this: with upskilling being the new trend, millions of employees today want to learn on the job, and companies are investing heavily in learning and development programmes for their employees. Additionally, organisations expect their employees to constantly evolve and be multi-skilled while on the job. AI technologies will increase demand for skills insulated from automation, such as creativity, leadership, and organisational and interpersonal communication skills. AI and automation-based solutions are already contributing to the transition from analogue to digital vocational education and training (VET) systems, and keeping abreast of this new technology and its application within a business is paramount for today's leaders.

Some Common benefits of AI and Machine Learning include:

Better, faster decision-making: The companies are harnessing the potential of AI and Machine Learning to identify their gaps and optimise these to support their growth. These capabilities are fostering a culture of new-age development within companies where employees are encouraged to solve problems with critical thinking and pursue new ideas for the overall growth of the company. All these factors are playing a vital role in ensuring better and faster decision making.

Increased operational efficiency: The new advancements in AI and Machine Learning promise continuous development in the operational excellence of new-age companies. Today, companies are at the forefront of using training and development to support the implementation and adoption of new technologies. With the help of expert teams, companies are able to identify gaps and adopt effective, customised solutions, or a mix of workplace solutions and skills products, to intensify their growth.

How can companies benefit by upskilling their workforce?

All these future-oriented training and upskilling programmes are essential for organisations fighting the skills gap. An upskilled, cross-trained workforce automatically translates into enhanced team productivity. For organisations to stay relevant with the changing times and ensure productivity, keeping their employees happy and providing them with a self-improving environment is very important. Training and upskilling programmes are an investment, and they show that companies care about their employees' futures. This plays a vital role in increasing employees' loyalty and ensures a high retention rate. Upskilling also delivers a substantial return on a relatively small investment: it wins the trust of employees and saves organisations the time and money they would otherwise spend replacing them. Further, it helps organisations avoid the tedious process of hiring new talent, as upskilled employees may recommend the organisation to others. New-age learning and development strategies that address the skills gap will help companies build cognitive capabilities, social skills, adaptability and resilience in the longer run.

The author is the executive director of Deakin University.

More:
How companies can benefit from upskilling their employees with AI and Machine learning - The Financial Express

Read More..