
The AI Leader Trying To Bring More Latin American Women Into The Tech Industry – Forbes

Belén Sánchez Hidalgo, a senior data scientist at DataRobot, is passionate about getting more women into artificial intelligence and machine learning roles. That's why she created WaiCAMP by DataRobot University, a scholarship-based, seven-week, bootcamp-style course for women in Latin America to learn applied data science and AI-related skills.

They just wrapped their first cohort, which provided scholarships to 60 Latin American women living across 11 different countries, and are hoping to expand globally.

I spoke to Sánchez Hidalgo about what's next for the program, along with her ideas for how to close the gender gap in AI.

DataRobot's Belén Sánchez Hidalgo in Quito

Amy Shoenthal: Tell me about your career pivot from public policy to tech and how you arrived at DataRobot.

Belén Sánchez Hidalgo: I worked for over a decade in public policy and international development. A big part of my work was innovation and tech, looking into how to foster productivity for small and medium-sized enterprises. When I was working at the World Bank in 2016, all these reports about the future of work started coming out.

I panicked about how the workplace was going to change and how automation was going to take jobs. I told my husband, Zaki, that our skills weren't going to be valuable in three years. A few days later, he sent me a picture of one of the Amazon drones making deliveries in Washington, DC, joking, "the robots are coming!"

Kidding aside, that's when I made the decision to quit the World Bank and learn more about automation. I signed up for a 12-week intensive data science immersive course at General Assembly, and that was the beginning of the transition.

After that, I was able to get my first job as a data scientist and technology advisor for the Inter-American Development Bank, combining the skills I had from my public policy and development days with my new data science education.

In 2019, I officially moved to the tech industry and started working at DataRobot. I began as an applied data science associate through a six-month program where the company trained people who had experience in a specific field but were new to data science. A lot of companies at the time were willing to invest in this type of training so people with other industry experience could make an easy transition to tech.

Shoenthal: What motivated you to create this program and how did DataRobot support that?

Sánchez Hidalgo: One of the cool programs DataRobot has is called Dream Big, a weekend immersion where employees are invited to think about their long-term goals. I was a bit skeptical at first, but I went and it was actually amazing. It gave me the chance to think about what I wanted to achieve in life, from health to finance and more. One of the areas we explored was legacy, which can be defined in so many different ways.

For many, legacy was all about raising their kids. I've always been driven to do things that have a positive impact on the lives of others. That's why I originally went into public policy. As I made the transition to tech, I realized I was missing that piece.

That weekend offered clarity on two things. One of them was about celebrating my two identities: I'm Latina, from Ecuador, and I'm a woman.

Second, I wanted to do something that accelerated the adoption of artificial intelligence in Latin America. Having worked in the tech and innovation policy space, I know how much new technologies can accelerate the competitiveness and productivity of nations.

As we have seen throughout history, when regions are not on top of new technologies, that can translate to slower economic growth. I wanted to see my region flourish.

Combining my identities with my passion, I realized my legacy could be to bring more women into this industry. So I put all these pieces together and decided to create a training program for women in Latin America.

I started with a pitch. My first outreach was to the team at Women in AI, an international organization with a community of 5,000 AI professionals worldwide. They said my idea aligned perfectly with what they were trying to do. Susan Verdiguel, the ambassador from Women in AI Mexico, brought on an amazing team of volunteers to get the first cohort together. Even though the partnership was with Women in AI Mexico, the program reached 60 women in 11 Latin American and Caribbean countries.

Then I spoke to my colleagues at DataRobot and they were on board immediately. They realized this would be a small lift that would generate a huge impact. I was able to find amazing ambassadors within the organization. We had a team of people across the marketing, localization, logistics, curriculum development, and so many other departments. It was really a team effort.

It took six months of development, and we launched in August.

Shoenthal: There's been a lot written about the AI gender gap and the pitfalls of not having a diverse staff on hand to program AI software, hardware and applications. Can you talk to me about why it's so important to diversify the industry?

Sánchez Hidalgo: More diversity would help avoid biased AI solutions. You have algorithms defining what type of marketing you're going to receive or whether you're going to be approved for a mortgage or not.

The World Economic Forum did research that showed only 22% of AI professionals are women.

How are we perpetuating stereotypes through AI? If you think about the voices of all the AI assistants like Alexa, their default is a woman's voice because women are seen as more submissive. As long as machine learning teams lack diverse perspectives, they're going to produce biased results. AI tools will reflect the biases of those who are building them. Bringing more diverse women into the design process will help us avoid those pitfalls.

We also have to ask: how is AI impacting the workplace? We still expect to see more jobs replaced by automation. But AI is also going to create more jobs. The part worrying me is that there have been studies showing women will be more impacted than men in this transition toward new jobs.

Administrative roles like secretaries will be easier to automate. So women, who hold the majority of those roles, need to make the transition to the new jobs that AI is going to create, and they need the training and tools to do that. Plus, once they enter the tech industry in general, they should see better benefits and higher compensation.

Shoenthal: Why are you focusing specifically on Latin America for this program? Do you hope to expand it to other regions down the line?

Sánchez Hidalgo: We took the last few months to evaluate the results of the first program and receive feedback from the participants and the community. There's a lot of appetite to go beyond Latin America. I want to expand so we can make it available to women on a global basis. We're trying to figure out what it will take to make that leap.

Shoenthal: What would you say to young women who are curious about exploring AI as a possible career path?

Sánchez Hidalgo: Don't be afraid to start learning new skills. You don't have to go back to college or university. We're living in a time where information is accessible. Take advantage of online courses, bootcamps and more. It's certainly a time commitment, but given what's at stake, it's worth taking action. Take it seriously and take advantage of all the different ways you can learn.

The other thing is, in order to be involved in AI and machine learning, you don't necessarily have to become a programmer. If you're afraid of coding, that's not a barrier to this industry. All my previous work and expertise was relevant to what I'm doing now. Data scientists need support to fully understand certain business problems. Learning more just gives you more choices. Don't underestimate the value of that.


AI and machine learning are improving weather forecasts, but they won’t replace human experts – Herald & Review

Meteorologist Todd Dankers monitors weather patterns in Boulder, Colorado, Oct. 24, 2018. Hyoung Chang/The Denver Post via Getty Images

A century ago, English mathematician Lewis Fry Richardson proposed a startling idea for that time: constructing a systematic process based on math for predicting the weather. In his 1922 book, Weather Prediction By Numerical Process, Richardson tried to write an equation that he could use to solve the dynamics of the atmosphere based on hand calculations.

It didn't work because not enough was known about the science of the atmosphere at that time. "Perhaps some day in the dim future it will be possible to advance the computations faster than the weather advances and at a cost less than the saving to mankind due to the information gained. But that is a dream," Richardson concluded.


A century later, modern weather forecasts are based on the kind of complex computations that Richardson imagined, and they've become more accurate than anything he envisioned. Especially in recent decades, steady progress in research, data and computing has enabled a quiet revolution of numerical weather prediction.

For example, a forecast of heavy rainfall two days in advance is now as good as a same-day forecast was in the mid-1990s. Errors in the predicted tracks of hurricanes have been cut in half in the last 30 years.

There still are major challenges. Thunderstorms that produce tornadoes, large hail or heavy rain remain difficult to predict. And then there's chaos, often described as the butterfly effect: the fact that small changes in complex processes make weather less predictable. Chaos limits our ability to make precise forecasts beyond about 10 days.

As in many other scientific fields, the proliferation of tools like artificial intelligence and machine learning holds great promise for weather prediction. We have seen some of what's possible in our research on applying machine learning to forecasts of high-impact weather. But we also believe that while these tools open up new possibilities for better forecasts, many parts of the job are handled more skillfully by experienced people.

Australian meteorologist Dean Narramore explains why it's hard to forecast large thunderstorms.

Predictions based on storm history

Today, weather forecasters' primary tools are numerical weather prediction models. These models use observations of the current state of the atmosphere from sources such as weather stations, weather balloons and satellites, and solve equations that govern the motion of air.

These models are outstanding at predicting most weather systems, but the smaller a weather event is, the more difficult it is to predict. As an example, think of a thunderstorm that dumps heavy rain on one side of town and nothing on the other side. Furthermore, experienced forecasters are remarkably good at synthesizing the huge amounts of weather information they have to consider each day, but their memories and bandwidth are not infinite.

Artificial intelligence and machine learning can help with some of these challenges. Forecasters are using these tools in several ways now, including making predictions of high-impact weather that the models can't provide.

In a project that started in 2017 and was reported in a 2021 paper, we focused on heavy rainfall. Of course, part of the problem is defining "heavy": two inches of rain in New Orleans may mean something very different than in Phoenix. We accounted for this by using observations of unusually large rain accumulations for each location across the country, along with a history of forecasts from a numerical weather prediction model.

We plugged that information into a machine learning method known as random forests, which uses many decision trees to split a mass of data and predict the likelihood of different outcomes. The result is a tool that forecasts the probability that rains heavy enough to generate flash flooding will occur.
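The random-forest step can be sketched in a few lines with scikit-learn. Note that the predictors, labels, and data below are invented stand-ins for illustration, not the actual training set or features of the system the authors describe.

```python
# Sketch of a random-forest rainfall-probability model, assuming
# scikit-learn and synthetic data; the real system's predictors and
# forecast archive are not shown in the article.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical predictors for 1,000 past cases: model precipitation
# forecast, column moisture, and low-level wind speed (all scaled 0-1).
X = rng.random((1000, 3))
# Label: 1 if excessive rain was later observed. The toy rule makes
# heavy rain likelier when forecast precipitation and moisture are high.
y = (X[:, 0] + X[:, 1] + 0.3 * rng.standard_normal(1000) > 1.2).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# For a new forecast case, the ensemble of decision trees votes to
# produce a probability of excessive rainfall.
case = np.array([[0.9, 0.8, 0.5]])
prob = forest.predict_proba(case)[0, 1]
print(round(prob, 2))
```

The appeal of random forests here is that the many-trees vote naturally yields a probability rather than a yes/no answer, which matches how forecasters communicate flash-flood risk.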

We have since applied similar methods to forecasting of tornadoes, large hail and severe thunderstorm winds. Other research groups are developing similar tools. National Weather Service forecasters are using some of these tools to better assess the likelihood of hazardous weather on a given day.

An excessive rainfall forecast from the Colorado State University-Machine Learning Probabilities system for the extreme rainfall associated with the remnants of Hurricane Ida in the mid-Atlantic states in September 2021. The left panel shows the forecast probability of excessive rainfall, available on the morning of Aug. 31, more than 24 hours ahead of the event. The right panel shows the resulting observations of excessive rainfall. The machine learning program correctly highlighted the corridor where widespread heavy rain and flooding would occur. Russ Schumacher and Aaron Hill, CC BY-ND

Researchers also are embedding machine learning within numerical weather prediction models to speed up tasks that can be intensive to compute, such as predicting how water vapor gets converted to rain, snow or hail.

It's possible that machine learning models could eventually replace traditional numerical weather prediction models altogether. Instead of solving a set of complex physical equations as the models do, these systems instead would process thousands of past weather maps to learn how weather systems tend to behave. Then, using current weather data, they would make weather predictions based on what they've learned from the past.
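The "learn from past maps" idea can be illustrated, under heavy simplification, with a linear least-squares fit standing in for the deep networks used in practice; everything below (the toy dynamics, grid size, and archive) is assumed for demonstration.

```python
# Toy sketch: instead of solving physics equations, fit a model that
# maps today's atmospheric state to tomorrow's, using an archive of
# past (today, tomorrow) pairs.
import numpy as np

rng = np.random.default_rng(1)

# Unknown "true" one-day dynamics on a 16-point grid, which the
# learned model never sees directly.
true_op = np.eye(16) + 0.05 * rng.standard_normal((16, 16))

# Archive of 500 past map pairs, with a little observation noise.
X = rng.standard_normal((500, 16))                    # today's maps
Y = X @ true_op.T + 0.01 * rng.standard_normal((500, 16))  # tomorrow's

# "Training": least-squares fit of tomorrow as a function of today.
learned, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Forecast a new day and compare against the true dynamics.
today = rng.standard_normal(16)
forecast = today @ learned
rel_err = np.linalg.norm(forecast - today @ true_op.T) / np.linalg.norm(today @ true_op.T)
print(round(rel_err, 3))
```

With enough archived pairs, the fitted map approximates the true dynamics closely, which is the essence of the data-driven approach, though real systems need nonlinear models and vastly more data to capture actual weather.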

Some studies have shown that machine learning-based forecast systems can predict general weather patterns as well as numerical weather prediction models while using only a fraction of the computing power the models require. These new tools don't yet forecast the details of local weather that people care about, but with many researchers carefully testing them and inventing new methods, there is promise for the future.

A forecast from the Colorado State University-Machine Learning Probabilities system for the severe weather outbreak on Dec. 15, 2021, in the U.S. Midwest. The panels illustrate the progression of the forecast from eight days in advance (lower right) to three days in advance (upper left), along with reports of severe weather (tornadoes in red, hail in green, damaging wind in blue). Russ Schumacher and Aaron Hill, CC BY-ND

The role of human expertise

There are also reasons for caution. Unlike numerical weather prediction models, forecast systems that use machine learning are not constrained by the physical laws that govern the atmosphere. So it's possible that they could produce unrealistic results: for example, forecasting temperature extremes beyond the bounds of nature. And it is unclear how they will perform during highly unusual or unprecedented weather phenomena.

And relying on AI tools can raise ethical concerns. For instance, locations with relatively few weather observations with which to train a machine learning system may not benefit from forecast improvements that are seen in other areas.

Another central question is how best to incorporate these new advances into forecasting. Finding the right balance between automated tools and the knowledge of expert human forecasters has long been a challenge in meteorology. Rapid technological advances will only make it more complicated.

Ideally, AI and machine learning will allow human forecasters to do their jobs more efficiently, spending less time on generating routine forecasts and more on communicating forecasts' implications and impacts to the public or, for private forecasters, to their clients. We believe that careful collaboration between scientists, forecasters and forecast users is the best way to achieve these goals and build trust in machine-generated weather forecasts.

Russ Schumacher receives funding from the National Oceanic and Atmospheric Administration for research on applying machine learning to improve forecasts of high-impact weather.

Aaron Hill receives funding from the National Oceanic and Atmospheric Administration to research machine learning applications that improve high-impact weather forecasts.

This article is republished from The Conversation under a Creative Commons license.

The fast winds, rapid rainfall, and huge storm surges of hurricanes make this natural disaster responsible for many deaths and millions of dollars' worth of damage each year. Capable of triggering flash floods, mudslides, and tornadoes, even weak hurricanes can cause extensive destruction to property, infrastructure, and crops. Other hurricanes remain at sea and never make landfall, limiting the destruction they cause. Advancements in technology, particularly satellite imaging, have greatly improved the warnings and advisories that prompt life-saving evacuations. But not all lives can be spared.

Also known as tropical cyclones, hurricanes are large, wet storms with high winds that form over warm water. Hurricane season in the Atlantic Basin (the Atlantic Ocean, the Gulf of Mexico, and the Caribbean Sea) runs from June 1 to Nov. 30 each year, though some hurricanes do form outside of this season. Many tropical storms are produced in an average year, though not all reach the strength of hurricanes.

Hurricanes are rated using the Saffir-Simpson Hurricane Wind Scale. Category 1 hurricanes have the lowest wind speeds at 74-95 miles per hour, and Category 5 hurricanes have the strongest winds at 157 miles per hour or higher. Storms that are Category 3 and above are considered major hurricanes.
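The scale's published wind thresholds translate directly into a small classifier; here is a minimal sketch using the commonly cited mile-per-hour bounds:

```python
# Saffir-Simpson Hurricane Wind Scale thresholds in mph, as commonly
# published: Cat 1 starts at 74 mph and Cat 5 at 157 mph.
def saffir_simpson_category(wind_mph: float) -> int:
    """Return the hurricane category (1-5), or 0 below hurricane strength."""
    thresholds = [(157, 5), (130, 4), (111, 3), (96, 2), (74, 1)]
    for lower_bound, category in thresholds:
        if wind_mph >= lower_bound:
            return category
    return 0

print(saffir_simpson_category(95))   # 1: top of Category 1
print(saffir_simpson_category(157))  # 5: minimal Category 5
```

A storm with `saffir_simpson_category(...) >= 3` would count as a major hurricane in the sense used throughout the list below.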

It seems hurricanes and other weather disasters are becoming increasingly destructive. There were 30 named storms and 14 hurricanes during the 2020 Atlantic hurricane season, with seven of those 14 hurricanes considered major. According to the National Oceanic and Atmospheric Administration (NOAA), 2020 marked "the fifth consecutive above-normal Atlantic hurricane season." The NOAA predicted another above-average season for 2021, a forecast already coming true.

Some hurricane seasons are worse than others. In 1920, the strongest hurricane was a Category 2 storm that killed one person in Louisiana. Others are devastating and destroy entire cities. Hurricane Katrina, an infamous storm that struck the U.S. in 2005, delivered lasting damage to New Orleans and cost the country over $100 billion.

Stacker obtained hurricane data, updated in 2020, from the NOAA's Atlantic Oceanic and Meteorological Laboratory. A list of notable events or facts from each year was compiled from news, scientific, and government reports. Read on to learn about the noteworthy tropical storms and hurricanes from the year you were born.


- Named storms: 5 (6.00 less than average)

- Hurricanes: 2 (3.91 less than average)

- Category 3 or higher hurricanes: 1 (1.52 less than average)

Because there was no satellite imagery in 1919, meteorologists temporarily lost track of a Category 4 Atlantic Gulf hurricane when ships stopped transmitting information about it. This storm was the deadliest hurricane ever to hit the Texas Coastal Bend, and it caused more than 500 people to die or be lost due to sinking or missing ships.

[Pictured: Map plotting the track and the intensity of the 1919 hurricane, according to the Saffir-Simpson scale.]

- Named storms: 5 (6.00 less than average)

- Hurricanes: 4 (1.91 less than average)

- Category 3 or higher hurricanes: 0 (2.52 less than average)

The 1920 hurricane season was less active than usual. One of the year's most notable storms was a Category 2 hurricane that hit Louisiana, killing one person. The storm ruined the sugar crop and caused $1.45 million in total damages.

- Named storms: 7 (4.00 less than average)

- Hurricanes: 5 (0.91 less than average)

- Category 3 or higher hurricanes: 2 (0.52 less than average)

On Oct. 28, 1921, Tampa Bay, Florida, experienced its most damaging hurricane since 1848. The unnamed hurricane killed eight people and cost over $5 million, not adjusted for inflation. It smashed boats against docks and destroyed parts of the local sea wall.

[Pictured: Wreckage of the Safety Harbor Springs Pavilion after the 1921 hurricane.]

- Named storms: 5 (6.00 less than average)

- Hurricanes: 3 (2.91 less than average)

- Category 3 or higher hurricanes: 1 (1.52 less than average)

No hurricanes made landfall in the U.S. during the 1922 hurricane season. However, a hurricane that downgraded to a tropical storm did strike El Salvador, overflowing the Rio Grande and causing more than $5 million of damage.

- Named storms: 9 (2.00 less than average)

- Hurricanes: 4 (1.91 less than average)

- Category 3 or higher hurricanes: 1 (1.52 less than average)

The 1923 hurricane season featured the most tropical storms since 1916. This count includes four hurricanes that touched down in the U.S., three of which made landfall along the Gulf Coast and one that hit Massachusetts.

- Named storms: 11 (0.00 more than average)

- Hurricanes: 5 (0.91 less than average)

- Category 3 or higher hurricanes: 2 (0.52 less than average)

A Category 5 hurricane struck Cuba in 1925. This unnamed storm was the first Category 5 hurricane recorded in the database managed by the National Hurricane Center.

- Named storms: 4 (7.00 less than average)

- Hurricanes: 1 (4.91 less than average)

- Category 3 or higher hurricanes: 0 (2.52 less than average)

The 1925 season started late, with the first hurricane forming on Aug. 18. That season also included a hurricane that made landfall in Florida on Nov. 30, the latest date in the calendar year that a hurricane has hit the U.S.

- Named storms: 11 (0.00 more than average)

- Hurricanes: 8 (2.09 more than average)

- Category 3 or higher hurricanes: 6 (3.48 more than average)

Of the eight hurricanes in the 1926 season, four proved particularly deadly. A storm in July killed 247 people, an August storm killed 25, a September storm killed 372, and a hurricane in October 1926 killed 709.

- Named storms: 8 (3.00 less than average)

- Hurricanes: 4 (1.91 less than average)

- Category 3 or higher hurricanes: 1 (1.52 less than average)

No hurricanes struck the U.S. in 1927. The most significant hurricane of the season was nicknamed The Great August Gales, and it was the deadliest tropical storm to hit Canada in the 1920s.

- Named storms: 6 (5.00 less than average)

- Hurricanes: 4 (1.91 less than average)

- Category 3 or higher hurricanes: 1 (1.52 less than average)

The Okeechobee Hurricane of 1928 was one of the deadliest storms ever to hit the U.S., killing between 2,500 and 3,000 people. The hurricane also hit Puerto Rico, landing on Sept. 13, the feast day of Saint Philip. It is the second hurricane to hit Puerto Rico on this day of celebration.

- Named storms: 5 (6.00 less than average)

- Hurricanes: 3 (2.91 less than average)

- Category 3 or higher hurricanes: 1 (1.52 less than average)

The Great Bahamas Hurricane, also known as the Great Andros Island hurricane, barely moved over the course of three days, hovering above Nassau and Andros in the Bahamas. It was also the first hurricane to approach the Bahamas from a northeast direction.

- Named storms: 3 (8.00 less than average)

- Hurricanes: 2 (3.91 less than average)

- Category 3 or higher hurricanes: 2 (0.52 less than average)

Though 1930 had a quiet hurricane season overall, it also had one of the Atlantic Ocean's deadliest hurricanes. The Dominican Republic Hurricane is the fifth deadliest storm in the region's history. It created a path of destruction up to 20 miles wide and killed between 2,000 and 8,000 people in the Dominican Republic, though it also brought much-needed rain to Puerto Rico.

- Named storms: 13 (2.00 more than average)

- Hurricanes: 3 (2.91 less than average)

- Category 3 or higher hurricanes: 1 (1.52 less than average)

In 1931, a Category 4 hurricane hit Belize, also known as British Honduras, and killed about 2,500 people. It is the deadliest hurricane to hit Belize in recorded history.

- Named storms: 15 (4.00 more than average)

- Hurricanes: 6 (0.09 more than average)

- Category 3 or higher hurricanes: 4 (1.48 more than average)

The Huracán de Santa Cruz del Sur, a Category 4 storm, hit Cuba in 1932 and caused 3,500 fatalities. Most of the deaths were due to a storm surge that rose to over 20 feet.

- Named storms: 20 (9.00 more than average)

- Hurricanes: 11 (5.09 more than average)

- Category 3 or higher hurricanes: 6 (3.48 more than average)

The 1933 season is the Atlantic Basin's third most active hurricane season in recorded history. It also held the record for the highest amount of wind energy created during the Atlantic hurricane season until 2011.

- Named storms: 13 (2.00 more than average)


DeepSig Named to CB Insights AI 100 List of Most Innovative Artificial Intelligence Startups for 2022 – Business Wire

ARLINGTON, Va.--(BUSINESS WIRE)--DeepSig, a leader in artificial intelligence (AI) and machine-learning (ML) innovation in wireless communications, today announced that it has ranked on CB Insights' annual AI 100 list. The AI 100 ranking recognizes the 100 most promising private artificial intelligence companies in the world.

"This is the sixth year that CB Insights has recognized the most promising private artificial intelligence companies with the AI 100. This year's cohort spans 13 industries, working on everything from recycling plastic waste to improving hearing aids," said Brian Lee, SVP of CB Insights' Intelligence Unit. "Last year's AI 100 companies had a remarkable run, raising more than $6 billion, including 20 mega-rounds worth more than $100 million each. We're excited to watch the companies on this year's list continue to grow and create products and services that meaningfully impact the world around them."

"We are honored to be recognized as one of the most promising AI startups by CB Insights for the third year in a row," said Jim Shea, DeepSig CEO. "DeepSig's continued growth and success would not be possible without our uniquely talented, fast-moving team and support from our partners and investors. AI is rapidly transforming 5G Radio Access Networks (RAN) and the path to AI-Native 6G, and we are committed to developing software which continues to improve performance and lower costs in both private enterprise and public mobile networks."

Utilizing the CB Insights platform, the research team picked 100 private market vendors from a pool of over 7,000 companies, including applicants and nominees. They were chosen based on factors including R&D activity, proprietary Mosaic scores, market potential, business relationships, investor profile, news sentiment analysis, competitive landscape, team strength, and tech novelty. The research team also reviewed thousands of Analyst Briefings submitted by applicants.

DeepSig is developing AI/ML technology to fundamentally transform wireless communications and radio sensing systems. DeepSig's unique and patented AI-Native software for Open vRAN and other radio components makes wireless networks more cost-effective, autonomous, efficient, and eco-friendly in their access and use of the radio spectrum. See the recent report released with a major industry partner, explaining how DeepSig is transforming the wireless air interface.


About CB Insights

CB Insights builds software that enables the world's best companies to discover, understand, and make technology decisions with confidence. By marrying data, expert insights, and work management tools, clients manage their end-to-end technology decision-making process on CB Insights. To learn more, please visit http://www.cbinsights.com.

About DeepSig

DeepSig, Inc. is a venture-backed, product-centric technology company developing revolutionary wireless processing software that uses cutting-edge machine learning techniques to transform baseband processing, wireless sensing, and other key wireless applications. Deep learning, a technology proven in vision and speech processing, now accelerates 5G network performance, capacity, operational efficiency, and the customer experience. For more information, visit https://www.deepsig.ai.


The Computer Vision Market in 2022 – Datamation

Computer vision is a subfield of artificial intelligence (AI) that trains software to understand and extract information from images and video data.

Computer vision seeks to imitate and automate the human visual system. The technology can be used in facial recognition, image matching, and visual object identification.

See below to learn all about the global computer vision market:

See more: Top Performing Artificial Intelligence Companies

The computer vision market was valued at $12.2 billion in 2020. Maintaining a compound annual growth rate (CAGR) of 6.4% over the forecast period from 2020 to 2027, it is expected to reach $18.9 billion by 2027.
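As a quick sanity check on that projection (an arithmetic illustration, not part of the underlying report), compounding $12.2 billion at 6.4% per year for the seven years from 2020 to 2027 does land close to the quoted figure:

```python
# Compound growth: value * (1 + rate)^years.
def project(value: float, cagr: float, years: int) -> float:
    return value * (1 + cagr) ** years

projected = project(12.2, 0.064, 2027 - 2020)
print(round(projected, 1))  # ≈ 18.8, consistent with the $18.9B estimate
```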


By vertical, the industrial segment accounted for 51% of the global computer vision market revenue in 2020, covering industries from automotive and consumer electronics to packaging and machinery.


Adoption of computer vision technology in business is still relatively low. However, a 2021 IDG/Insight survey found that while only 10% of organizations are currently using computer vision, 81% are in the process of investigating or implementing the technology.

Participants in the survey from various industries are looking to use computer vision to improve organization security and employee safety conditions.

"Computer vision is starting to change society and the whole world as it becomes ubiquitous. Autonomous vehicles and other industries rely on this technology to increase human capacity," says Abhinai Srivastava, member of the Forbes Technology Council.

"Reaching the full potential of computer vision will be possible once we can transition from research labs into the real world."

See more: The Artificial Intelligence (AI) Market

Computer vision combines the capabilities of AI and deep learning, using neural networks that enable computers to process and analyze image and video data.

Systems can be trained using different models for various purposes, from specific object detection to image classification to facial recognition.

Computer vision techniques include:

Object detection is responsible for finding and identifying objects in imaging. Using deep learning and machine learning algorithms, this type of computer vision can detect and identify the characteristics of objects in various forms.

Object detection is most commonly used in manufacturing, warehousing, and stocking. A single, high-quality image of numerous objects can be broken down into the quantity and type of objects.

Object tracking techniques are capable of detecting multiple objects in a video. Object tracking algorithms can be trained to detect and track a specific subset of objects, such as faces, pedestrians, or a species of animal.

While unable to differentiate between the objects it detects, object tracking can be used in self-driving cars and navigation technology.

Instead of focusing on parts of an image, image classification is concerned with labeling an image in its entirety.

When looking for a specific type of picture, image classification can be used in medical imaging, traffic control, and search engines.
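Whole-image labeling can be shown with a minimal example. The sketch below uses scikit-learn's bundled 8x8 digit images as a stand-in dataset, not a production computer vision stack:

```python
# Image classification: assign one label to each image as a whole.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                     # 1,797 small images, 10 classes
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Each 8x8 image is flattened to 64 features; the model assigns a
# single label to the entire image, not to regions within it.
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(round(clf.score(X_test, y_test), 2))
```

This is the key contrast with detection and segmentation: the classifier never says where in the image anything is, only what the image depicts.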

Semantic segmentation attempts to understand an image beyond its main components. By dividing the image into groups of pixels, the computer vision model can identify objects within an image, as well as the differences between them.

While object detection is only able to give the approximate location of an object, semantic segmentation takes things a step further by finding the object's boundaries in the image and, as a result, its specific location.

Instance segmentation is able to identify every instance of every object within an image or video. It's able to detect and mask each object in question, one pixel at a time.

Advanced instance segmentation models can handle overlapping objects and background elements. By identifying the objects and setting their boundaries, instance segmentation ensures the size and distance of an object are more accurate.
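The difference between a coarse detection box and a per-pixel segmentation mask can be sketched with a toy example (the threshold "model" below is purely illustrative; real systems use trained convolutional networks, but the output format is the same: a label for every pixel):

```python
# Toy illustration: segmentation labels every pixel, while detection
# collapses the result to a coarse bounding box.

def segment(image, threshold=128):
    """Return a per-pixel mask: 1 = object, 0 = background."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def bounding_box(mask):
    """Collapse a mask to the coarse box a detector would report."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)

image = [
    [10,  10,  10, 10],
    [10, 200, 210, 10],
    [10, 220, 230, 10],
    [10,  10,  10, 10],
]
mask = segment(image)
print(bounding_box(mask))  # the box loses the exact boundary the mask keeps
```

The mask retains every object pixel; the box only records the extremes, which is why segmentation yields more precise size and location estimates.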

As a field of AI, computer vision is another technique meant to make devices, software, and machines smarter and more autonomous.

Different computer vision techniques, each specializing in a different subfield of vision, offer various benefits in their applications.

"The developments with computer vision in recent years were facilitated by machine learning technology, in particular the iterative learning process of neural networks, and significant leaps in computing power, data storage, and high-quality yet inexpensive input devices," says Bernard Marr, author and strategic business and technology advisor.

There are endless applications where the ability to extract meaning from visual data is useful. Computer vision also combines with other technologies, such as augmented and virtual reality, to enable additional capabilities.

Thanks to its countless applications and capabilities, computer vision technology is used by companies and organizations in various industries.

Solera Holdings is a provider of data, applications, and financial services for the automotive and insurance industries. Founded in 2005, Solera now manages over 300 million financial transactions annually with a team of 6,500 professionals worldwide.

Solera Holdings carries a massive database of damage claims in images and videos that require careful processing for settlements and payments.

Using Google Cloud AI/ML products, Solera launched Qapter in 2020, an intelligent solution designed for the entirety of the vehicle claims cycle.

"Insurance companies had encountered a number of challenges in trying to commercialize computer vision solutions. They would do their research projects, and could usually build a working solution in-house, but they couldn't scale. What we learned from this is the importance of building a productized solution to avoid failing as an AI project," says Marcos Malzone, Vice President of Product Management at Solera Holdings.

Using visual data from insurance claims, Solera Holdings was able to offer a faster and more accurate cost estimation for the drivers, insurance providers, and automotive technicians.

Amsterdam University Medical Centers (UMC) is one of the leading international centers of academic medicine in the Netherlands. Based in a university, Amsterdam UMC is responsible for treating its patients, conducting academic medical research, and providing medical education for enrolled students.

Home to one of Europe's largest oncology centers, Amsterdam UMC regularly collects massive amounts of data on its patients, ranging from standard patient records to biomarkers, DNA, and genomic data.

Working with SAS, Amsterdam UMC was able to employ computer vision and predictive analytics in order to identify cancer patients. The AI model provides researchers and physicians with a 3D representation of each tumor and its volume once detected.

"We're now capable of fully automating the response evaluation, and that's really big news. The process is not only faster but more accurate than when it's conducted by humans," says Dr. Geert Kazemier, Professor of Surgery and Director of Surgical Oncology at Amsterdam UMC.

"There are a lot of people working with the SAS platform who do not have analytic or data science training. This is the next phase of analytics for us, and I see tremendous opportunities ahead," adds Dr. Kazemier.

Thanks to SAS computer vision and analytics, the researchers at Amsterdam UMC were able to obtain test and research results faster and detect various forms of cancer at earlier stages in patients, with more research to come.

TripleLift is a programmatic advertising technology company that develops complete advertising campaigns for clients in a wide variety of industries. Founded in 2012 in New York, it provides 13 formats of TV, video, and branded content advertising material.

As media consumers demanded shorter and fewer ads, TripleLift used machine learning to composite non-intrusive brand ads onto select scenes of TV and streaming shows. It uses computer vision to analyze video content and determine the moment and location for ad insertion.

"AWS solutions can do the work in about half the time it would take a human to do so manually. Now our creative team has more time to do creative work, not just watch videos," says Luis Bracamontes, computer vision and ML engineer at TripleLift.

"As we receive more volume, the solution generates insertions faster. So it both saves time and scales to help us manage high volumes of content," adds Bracamontes.

Using Amazon Rekognition and Amazon SageMaker, along with other AWS solutions, TripleLift was able to build a video analysis infrastructure in less than 6 months and reduce video analysis time by 50%.

Some of the leading players in the global computer vision market include:


Read more here:
The Computer Vision Market in 2022 - Datamation

Read More..

Corporate Leaders Need To Upgrade Tech Skills To Stay In The Game – Outlook India

Over the last few years, emerging technologies have forced traditional business models to adapt to a changing environment driven by artificial intelligence, machine learning, data analytics and robotics.

However, many tech and HR experts feel that a substantial percentage of Indian business leaders need to upskill their technical know-how to keep pace with the technology.

Aman Atri, who has served as HR head in companies like Gillette and Reliance, says that 40 per cent to 50 per cent of corporate leaders from various business verticals are not up to date with the application of new technologies. "I do not mean to say that corporate leaders should know and learn how to develop applications and machines, but they should know how the use of the latest technology in their respective businesses can help them remain relevant and competitive in the market," he says.

Over time, it has become evident that leaders across sectors will have to upskill as far as tech is concerned.

Take the online retail market, for instance. Today, when consumers shop on any app or website, it gives them recommendations based on their buying behaviour and history. It also tells them what they might want to buy in addition to the items in their carts and follows it up with related similar products that people normally buy. This is all based on a machine learning algorithm which runs behind the scenes and captures the buying pattern, history and behaviour of the consumers.
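The behind-the-scenes recommendation logic described above can be illustrated with a toy co-occurrence recommender (a deliberately simplified sketch; the baskets and item names are invented, and production systems use far richer models):

```python
# Toy "customers also bought" logic: count how often pairs of items appear
# together in past baskets, then recommend the items most frequently
# co-purchased with what is already in the cart.
from collections import Counter
from itertools import combinations

past_baskets = [
    {"phone", "case", "charger"},
    {"phone", "case"},
    {"phone", "charger"},
    {"laptop", "mouse"},
]

co_counts = Counter()
for basket in past_baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(cart, k=2):
    """Score items not in the cart by co-purchase counts with cart items."""
    scores = Counter()
    for item in cart:
        for (a, b), n in co_counts.items():
            if a == item and b not in cart:
                scores[b] += n
    return [item for item, _ in scores.most_common(k)]

print(recommend({"phone"}))
```

A shopper with only a phone in the cart gets the case and charger suggested, because those were bought alongside phones in the historical baskets.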

Experts say that many online retail companies are receiving up to 30 per cent more revenue just because of machine learning algorithms which help them to upsell and cross-sell products.

"Now, the customers know the app is going to recommend what they should buy. Sometimes, people look for advice also. This is one instance of how business leaders from the retail sector can stay ahead in the rapidly evolving field of technology," says a retail sector expert.

A similar shift can be noticed in the healthcare and medical sciences area as well where machine learning is helping doctors in the early diagnosis of critical illnesses by looking at MRI and CT scans.

Here, with the help of engineers, the machine learning algorithm is trained by doctors to pick up the signs of certain kinds of illnesses, which can be detected earlier and more reliably by machines, enabling faster treatment by doctors and better chances of recovery. To add to that, the success rate has also been fairly good.

Gerald Jaideep, CEO, Medvarsity Online, a company that offers advanced medical online courses, is of the view that while Indians have been proudly steering the leading tech giants of the world and also driving innovations back home, the technology landscape in India is far from being seamless and evenly distributed.

He points out that even though a fully autonomous business model is yet to see the light of day in the country, corporations are inventing, integrating and even retrofitting automated systems and processes into the value chain, from data processing to collaborations.

"For the leadership and the teams to work more effectively and creatively, they must be able to interact with technology in a natural and fluid way. The ability to interface with technology can help achieve that by offering novel solutions to new-age challenges and have a greater business impact," says Jaideep.

Having said that, human resource experts also believe that more corporate leaders are making efforts to understand the needs of the future workforce, with Tata Consultancy Services (TCS) being a good case in point.

Realising the need to make business leaders and senior managers understand how the application of machine learning will grow and transform its businesses, TCS, along with DeakinCo., the corporate learning and development division of Deakin University, one of the leading universities in Australia, recently co-developed a corporate learning programme. It meets the growing need to understand, manage and progress adoption and application of emerging technologies such as artificial intelligence, machine learning and the internet of things, to name a few.

"This collaboration between DeakinCo. and TCS brings together unique academic and industry expertise in technology areas. This will help business leaders and decision-makers with non-IT backgrounds enhance their skills and grow their businesses by applying new technologies," says Ankur Mathur, head, education business, TCS.

Glenn Campbell, CEO, DeakinCo., has a very particular reason behind the collaboration. "What we found was that the level of understanding about emerging technologies is not that great among business leaders who come from a non-technical background. That is the reason why we co-developed and launched this programme," he says.

Besides, many educational institutions have also started foraying into this area and enabling the business world to catch up with technological advancements.

Original post:
Corporate Leaders Need To Upgrade Tech Skills To Stay In The Game - Outlook India


TCS AI-Powered Software for Sustainable Smart Cities, Enterprises and Customer Analytics Now on Azure Marketplace – Smart Cities Dive

NEW YORK

TCS AI-Powered Software for Smart Cities and Customer Analytics Now on Azure Marketplace: TCS Intelligent Urban Exchange and TCS Customer Intelligence & Insights Software Empower Businesses and Governments to Deliver Hyper-Personalized Customer Experiences, Resilient Enterprises, Smarter Cities, and Sustainable Operations.

Tata Consultancy Services (TCS) announced the availability of its TCS Intelligent Urban Exchange (IUX) and TCS Customer Intelligence & Insights (CI&I) software in the Microsoft Azure Marketplace.

CI&I customer analytics helps banks, retailers, insurers, and other businesses take advantage of AI, machine learning, and customer data platform capabilities to deliver hyper-personalized consumer and citizen experiences, while protecting privacy and ensuring consent. Organizations can use CI&I to surface insights, predictions, and recommended actions and offers in real time to improve customer satisfaction.

IUX helps enterprises and cities meet sustainability goals and elevate citizen and employee experiences by optimizing services and enabling infrastructure to respond to predicted and dynamic events in real time. Harnessing data across operational silos, IUX applies AI and machine learning to simulate evolving scenarios, enabling services and IoT infrastructure to take appropriate actions. IUX modules include intelligent building energy, sustainability, streetlights, transportation, water management, energy and resources operations, command center, and workplace resilience.

"Enterprises must go beyond transforming their technology. They must make a meaningful difference to the customers and communities they serve," said Ashvini Saxena, Vice President and Global Head, TCS Components Engineering Group and Digital Software & Solutions. "This has made sustainable, customer-centric initiatives powered by AI a business imperative. TCS IUX and CI&I on Microsoft Azure will make it easier for businesses and governments to deploy exciting digital transformation initiatives."

TCS is a Microsoft Gold Partner with over 1,000 successful Azure engagements for more than 225 global customers. TCS recently won the 2021 Microsoft Partner of the Year Awards for Azure Intelligent Cloud in France and Dynamics 365 Field Service in the U.S. and is a designated Microsoft Azure Expert Managed Service Partner.

For more information on TCS Intelligent Urban Exchange, visit https://www.tcs.com/smart-city-solutions

For more information on TCS Customer Intelligence & Insights, visit https://www.tcs.com/solutions-customer-intelligence-insights

Visit the Azure Marketplace listings for more information:

TCS IUX: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/tataconsultancyservices-er.tcs_intelligent_urban_exchange_iux?tab=Overview

TCS CI&I: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/tataconsultancyservicesltd-cii.customer_intelligence_and_insights?tab=Overview

###

About Tata Consultancy Services (TCS)

Tata Consultancy Services is an IT services, consulting and business solutions organization that has been partnering with many of the world's largest businesses in their transformation journeys for over 50 years. TCS offers a consulting-led, cognitive powered, integrated portfolio of business, technology and engineering services and solutions. This is delivered through its unique Location Independent Agile delivery model, recognized as a benchmark of excellence in software development.

A part of the Tata group, India's largest multinational business group, TCS has over 556,000 of the world's best-trained consultants in 46 countries. The company generated consolidated revenues of US $22.2 billion in the fiscal year ended March 31, 2021 and is listed on the BSE (formerly Bombay Stock Exchange) and the NSE (National Stock Exchange) in India. TCS' proactive stance on climate change and award-winning work with communities across the world have earned it a place in leading sustainability indices such as the MSCI Global Sustainability Index and the FTSE4Good Emerging Index. For more information, visit http://www.tcs.com.

See original here:
TCS AI-Powered Software for Sustainable Smart Cities, Enterprises and Customer Analytics Now on Azure Marketplace - Smart Cities Dive


AI In Drug Discovery – Food and Drugs Law – UK – Mondaq


Developing new or more effective drugs for treating medical conditions can revolutionise care, and drug discovery is a huge part of the business of pharmaceutical companies. However, finding which drugs are effective for treating which conditions is difficult. Identifying and screening candidate drugs is typically extremely time-consuming, which makes the search for new drugs slow, uncertain, and very expensive.

In modern science, this is not for lack of data. Plenty of data exists on how small molecules interact with biological systems such as proteins. However, sorting through all this data to find promising combinations of molecules and biological pathways to treat particular conditions is very slow. Machine learning offers a way to overcome this problem.

We reported recently on AlphaFold, a machine-learning tool capable of predicting protein structures with much greater reliability than previous tools. Other programs already exist that can predict the structures of small molecules, which are much easier to determine from their chemical composition than the structures of proteins. Based on the predicted structures of proteins and small molecules, machine learning can predict their interactions, and work through libraries of molecules to identify candidate drugs much more quickly than would be possible with human effort alone.

This type of processing can identify entirely novel drugs, but may also be used to identify new applications of existing drugs. Identifying new uses of existing drugs can be particularly valuable, since manufacturing capacity and detailed data on side effects may already exist that can allow the drug to be more rapidly repurposed to treat a new condition.

Machine learning can not only identify molecules likely to interact with a target protein, but may also be able to extrapolate properties such as toxicity and bio-absorption using data from other similar molecules. In this way, machine-learning algorithms could also effectively carry out some of the early stages of drug screening in silico, thereby reducing the need for expensive and time-consuming laboratory testing.
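As a toy illustration of extrapolating properties from similar molecules, the sketch below represents molecules as fingerprint bit-sets, compares them with Tanimoto similarity, and borrows the label of the nearest known neighbour. All fingerprints and labels are invented for illustration; real pipelines use chemically meaningful fingerprints and far larger datasets.

```python
# Hypothetical "similar molecules, similar properties" sketch: nearest-
# neighbour property prediction over fingerprint bit-sets.

def tanimoto(fp_a, fp_b):
    """Jaccard/Tanimoto similarity of two fingerprint bit-sets."""
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Invented training data: fingerprint -> known toxicity label.
known = {
    frozenset({1, 2, 3, 7}): "toxic",
    frozenset({4, 5, 6}): "non-toxic",
    frozenset({2, 3, 7, 9}): "toxic",
}

def predict(fp):
    """Borrow the label of the most similar known molecule."""
    best = max(known, key=lambda k: tanimoto(fp, k))
    return known[best]

print(predict(frozenset({2, 3, 7})))  # closest to the "toxic" neighbours
```

The same nearest-neighbour idea, scaled up, is one simple way a screening pipeline can triage candidates in silico before any laboratory testing.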

Other applications of machine learning in drug discovery include personalised medicine. A major problem with some drugs is the varying response of different individuals to the drug, both in terms of efficacy and side-effects. Some patients with chronic conditions such as high blood pressure may spend months or years cycling through alternative drugs to find one which is effective and has acceptable side effects. This can represent an enormous waste of physician time and create significant inconvenience for the patient. Using data on the responses of thousands of other patients to different drugs, machine learning can be used to predict the efficacy of those drugs for specific individuals based on genetic profiling or other biological markers.

Identifying candidate drugs as discussed above relies on knowing which biological target it is desirable to affect, so that molecules can be tested for their interaction with relevant proteins. However, at an even higher level, machine learning techniques may allow the identification of entirely novel mechanisms for treating medical conditions.

Many studies exist in which participants have their genetic data sequenced and correlated with data on a wide variety of different phenotypes. These studies are often used to try to identify genetic factors that affect an individual's chance of developing disease. However, machine learning techniques can also identify correlations between medical conditions and other measurable parameters, such as expression of certain proteins or levels of particular hormones. If plausible biological pathways can be determined using these correlations, this could even lead to the identification of entirely new mechanisms by which certain conditions could be treated.

Examples of AI-based drug discovery already exist in the real world, with molecules identified using AI methods having entered clinical trials. Numerous companies are using AI technology to identify potential new drugs and predict their efficacy for individual patients. Some estimates suggest that over 2 billion USD in investment funding was raised by companies in this technology area in the first half of 2021 alone. As with any technology, patents held by these companies allow them to protect their intellectual property and provide security for them and their commercial partners.

Machine learning excels at identifying patterns and correlations in huge data sets. Exploiting this ability for drug discovery has the potential to dramatically improve healthcare outcomes for patients, and streamline the unwieldy and expensive process of developing new treatments. We may stand on the threshold of a new era of personalised medicine and rapid drug development.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


Read more from the original source:
AI In Drug Discovery - Food and Drugs Law - UK - Mondaq


FormFactor (FORM) Expands Customer Base With SEEQC Partnership – Zacks Investment Research

FormFactor, Inc. (FORM) is consistently working toward gaining momentum among customers on the back of its robust portfolio of solutions.

This is evident from the fact that FormFactor's quantum cryogenic measurement solution was recently selected by SEEQC, a digital quantum computing company.

The measurement solution includes the sub-50mK HPD Model 106 adiabatic demagnetization refrigerator as well as the PQ500 RF and DC probe socket. The solution further complements sub-10mK dilution refrigerators to speed up cryogenic test cycles by more than two times.

With FormFactor's quantum cryogenic measurement solution, SEEQC aims to accelerate its quantum computing research and development program.

SEEQC is already using FormFactor's probe socket solution to remove wire-bonding for qubits, SFQ circuits and multi-chip modules, accelerating device characterization for final quantum testing.

With the recent partnership, both companies strive to focus on addressing the challenges related to test and measurement in the emerging quantum computing industry.

The latest collaboration with SEEQC bodes well for FormFactor's growing efforts toward expanding its presence in the rising quantum computing market.

Apart from this recent move, FORM had last year released the HPD IQ1000, a scanning SQUID (Superconducting Quantum Interference Device) microscope, which delivers cryogenic system integration and automation to help device designers accelerate both quantum research and higher-volume engineering.

The global quantum computing market is witnessing growth owing to the early adoption of quantum computing in the banking and finance sector. Growing investments by governments of different countries for carrying out research and development activities related to quantum computing technology are further driving the market.

Per a Fortune Business Insights report, this market is expected to hit $3.2 billion in 2028, witnessing a CAGR of 30.8% between 2021 and 2028.

The market is likely to touch $1.8 billion by 2026, seeing a CAGR of 30.2% from 2021 to 2026, according to a Markets and Markets report.

The recent selection of its quantum cryogenic measurement solution by SEEQC adds strength to FormFactor's customer base. Further, it highlights the efficiency and reliability of the solution.

FormFactor keeps bringing advanced technology-based test and measurement solutions to better serve its customers.

FORM recently unveiled the TESLA300 high power semiconductor probing system, suitable for IGBT and power MOSFET device measurements. The probing system incorporates new anti-arcing and wafer automation features to enable high-throughput, unattended testing over a wide thermal range to accelerate development and minimize the production cost of power devices.

Further, FormFactor, in collaboration with Northrop Grumman Corporation, introduced a fully automated cryogenic wafer probe system that operates at 4 Kelvin and below. This ramps up the production, eventually fast-tracking the development process of superconducting compute applications.

Thus, FormFactor's strong portfolio efforts are likely to help it expand its reach among customers, which, in turn, might contribute well to its top-line growth in the days ahead.

Currently, FormFactor carries a Zacks Rank #3 (Hold). However, the stock has declined 11.9% in the year-to-date period compared with the Computer and Technology sector's fall of 24.6%.

Investors interested in the broader Zacks Computer & Technology sector can consider some better-ranked stocks like Avnet (AVT), Sierra Wireless (SWIR) and Airgain (AIRG). While Avnet sports a Zacks Rank #1 (Strong Buy), Sierra Wireless and Airgain carry a Zacks Rank #2 (Buy) at present. You can see the complete list of today's Zacks #1 Rank stocks here.

Avnet has gained 16.6% in the year-to-date period. The long-term earnings growth rate for AVT is currently projected at 37.2%.

Sierra Wireless has gained 40.5% in the year-to-date period. The long-term earnings growth rate for SWIR is currently projected at 15%.

Airgain has gained 3% in the year-to-date period. The long-term earnings growth rate for AIRG is currently projected at 35%.

Follow this link:
FormFactor (FORM) Expands Customer Base With SEEQC Partnership - Zacks Investment Research


Singapore sets the pace for quantum computing in Southeast Asia – Tech Wire Asia

As innovation in the quantum computing industry continues, Singapore has committed itself to being part of the journey as well. Last week, the Singaporean government announced its increasing investments in the industry by setting up two new initiatives to boost talent development and provide better access to the technology.

The National Quantum Computing Hub in Singapore will pool expertise and resources from the Centre for Quantum Technologies and other institutions. Heng Swee Keat, Singapore's Deputy Prime Minister, also unveiled the National Quantum Fabless Foundry, which is expected to develop the components and materials needed to build quantum computers and devices.

Quantum computing has already taken the world by storm in recent times, especially with more governments announcing specific moves in the industry. While those involved in the development of quantum computing tend to see it as a collective initiative to improve the technology, some governments see it as a means to gain an edge over each other.

Case in point, China and the US continue to compete with each other in the field. While there are many areas within quantum computing, both countries continue to pursue innovations across them as well. China claims to have the world's fastest and most powerful quantum computer, but the US makes similar claims.


Last month, US President Joe Biden signed a National Security Memorandum aimed at maintaining U.S. leadership in quantum information sciences and mitigating the risks of quantum computing to the nation's security. For the US, a quantum computer of sufficient size and sophistication, also known as a cryptanalytically relevant quantum computer, will be capable of breaking much of the public-key cryptography used on digital systems across the United States and the world.

Despite these concerns about the technology, scientists and engineers in the field are focused on bringing out the best of quantum computing. As such, the hub in Singapore, which will also host the country's first quantum computer, will allow both government agencies and enterprises to access and test it directly.

According to a report by Singapore's Straits Times, both the hub and fabless foundry will be part of the National Research Foundation's Quantum Engineering Programme (QEP). In February, Singapore also announced the National Quantum-Safe Network. The three initiatives will receive at least SG$23.5 million from the QEP for up to three years under Singapore's Research, Innovation and Enterprise 2020 plan.


With all the hype and excitement in the industry, Tech Wire Asia caught up with Dr Si-Hui Tan, Chief Science Officer at Horizon Quantum Computing at the ATxSG summit to get her views on the industry, from the technology to the talent development in the field.

For Dr Tan, while Singapore and the rest of the world are getting hyped up about quantum computing and how it can change the planet, the reality is that there is still a long way to go before any real-world use cases can be implemented.

"While quantum computing is still a very nascent-state technology, the news in Singapore will see some acceleration in adoption in this area. When I say adoption, I still have to be very cautious on how we can use a quantum computer, because we still do not have a real-world use case with the existing limit of technology that we have," commented Dr Tan.

Dr Tan pointed out that nobody knows for sure how long the world is from seeing a real-world quantum computing use case. While there is a report by some prominent physicists from across the world that tries to capture beliefs about when practical quantum computing will come about, she feels the real answer is that no one knows for sure. For now, only some bits of the technology are being used.

At the same time, the competition in quantum computing is not the main issue for her. Despite strategic interest from some nation-states, Dr Tan explained that since the technology is nascent, the resources of any one country or organization are not enough to get it going on its own agenda.

With that said, Dr Tan also highlighted the concerns about the cost of research and development for quantum computing, especially with some organizations feeling the technology can be costly to implement and maintain in the long run.

"When you look at R&D, there will always be a cost. Once there is widespread adoption, commercialization will come in. For example, one of the applications of quantum computing is quantum communication. You can send information through fiber optics at 1440 nanometres. If the current fiber optic network in the world uses that same wavelength, the network can use it for quantum computing with some tweaking. The cost is lowered as you are leveraging existing infra. This is one way of reducing costs and we will start to see similar trends in quantum computing," explained Dr Tan.

For enterprise use cases like high-performance computing (HPC), Dr Tan believes while the real-world application is not there yet, the technology could affect everything with HPC. Businesses that have not adopted the technology could find themselves losing out in the future.

"Looking at the way things are moving, you may see more industry players come in to look at problems, to see how quantum computing can be used to solve them. This includes the possibility of having quantum computing as the new form of HPC," mentioned Dr Tan.

With Singapore looking to be a leader in this field, the next question on everyone's mind is whether the country has sufficient talent. Interestingly, Dr Tan mentioned that while there is indeed a global shortage in the field, the uptake of university courses and quantum computing research has been positive.

"It's very encouraging. A lot of information is available on quantum computing today, especially in reaching out to youngsters. Researchers are producing and sharing content on YouTube, for example. The availability of this information will see more people interested in the industry," said Dr Tan.

The three national quantum platforms that were announced are also hosted across the National University of Singapore, Nanyang Technological University, Singapore, the Agency for Science, Technology and Research, and the National Supercomputing Centre (NSCC) Singapore. They will coordinate activities across research organizations and build public-private collaborations to put Singapore at the cutting edge in quantum technologies.

Aaron Raj

Aaron enjoys writing about enterprise technology in the region. He has attended and covered many local and international tech expos, events and forums, speaking to some of the biggest tech personalities in the industry. With over a decade of experience in the media, Aaron previously worked on politics, business, sports and entertainment news.

See the article here:
Singapore sets the pace for quantum computing in Southeast Asia - Tech Wire Asia


Quantum Computing : Q1 reports initial revenue. We expect major acquisition of QPhoton to be positive for stock. Lowering P/T to $9. -…

Q1 initial revenue: Quantum recently (on May 23) reported its fiscal Q1 2022 (ending March) results. The company reported initial revenue of $0.03 million. EPS was $(0.24), compared to our estimate of $(0.16). There were no consensus estimates or company guidance.

Still very early stage: Quantum's recent financial performance reflects its developmental and early commercialization stage; the company has finally reported initial revenue. Having recently launched several of its initial products, the company is currently focused on sales and marketing. We believe investors should focus on the commercialization of its software, through which we believe the company can grow revenue quickly within the next year.

Lowering estimates: We are lowering our 2022 estimates for revenue to $1.0 million from $1.5 million, and for EPS to $(0.73) from $(0.59). Our estimates do not reflect the pending acquisition.

To acquire QPhoton: In May, the company announced that it will be acquiring QPhoton, Inc. QPhoton is a privately held company that is a leading innovator in the quantum photonic technology space. Merger consideration will be paid in stock (~37 million shares). The deal is expected to close in 2H 2022.

Investment in QPhoton: In February, the company announced a marketing agreement with QPhoton, to merge QCI's quantum software solution, Qatalyst, with QPhoton's advanced photonic quantum technologies for its application to QCI-specific solutions. As part of this agreement, Quantum will invest $2.5 million in QPhoton.

Focused on quantum computing: Quantum's flagship software solution, Qatalyst, is a ready-to-run quantum and classical software for optimization computations for faster, better, and more diverse business decisions. By being early in this rapidly growing industry, we believe Quantum is well-positioned to capture and drive a meaningful market share and industry growth.
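The note does not describe Qatalyst's interface, but the class of problem such hybrid quantum-classical optimization software targets is commonly expressed as a QUBO (quadratic unconstrained binary optimization). As a hedged illustration only, here is a minimal brute-force QUBO solver; the Q matrix and all names are invented for this sketch and have no connection to Qatalyst itself.

```python
import itertools

# Toy QUBO: minimize x^T Q x over binary vectors x.
# This 3-variable Q matrix is an invented example, not a Qatalyst input.
Q = [
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
]

def qubo_energy(x, Q):
    """Compute x^T Q x for one binary assignment x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    """Enumerate all 2^n assignments; feasible only for tiny n.
    Quantum or hybrid solvers aim to do better than this exhaustive search."""
    n = len(Q)
    best = min(itertools.product([0, 1], repeat=n),
               key=lambda x: qubo_energy(x, Q))
    return best, qubo_energy(best, Q)

solution, energy = brute_force_qubo(Q)
print(solution, energy)  # the lowest-energy binary assignment and its cost
```

The point of the sketch is the problem shape, not the method: real quantum or hybrid solvers replace the exhaustive enumeration, which grows as 2^n.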

The need for quantum computing: The rapid and widespread adoption of technologies such as the Internet, artificial intelligence, virtual and augmented reality, 3D imaging, and the Internet of Things (IoT), have served to exponentially increase the generation of data. This has driven up the demand for high-performance computing to process all this data.

Large market potential: As quantum computing hardware continues to advance, we expect a corresponding growth in demand for quantum software to run on these computers. The U.S. Government has committed $1.3 billion to funding quantum information science programs.

Balance sheet: In Q4, the company raised $8.5 million selling preferred stock. We believe the company has enough cash into 2023.
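A runway claim like this can be sanity-checked with back-of-the-envelope arithmetic. The quarterly burn figure below is an invented placeholder (the note does not disclose it); only the $8.5 million raise comes from the note.

```python
# Back-of-the-envelope cash runway check.
cash_on_hand = 8.5e6    # raised in Q4 via preferred stock (from the note)
quarterly_burn = 1.8e6  # assumed net cash burn per quarter -- illustrative only

quarters_of_runway = cash_on_hand / quarterly_burn
print(round(quarters_of_runway, 1))  # quarters of operation the cash could fund
```

At an assumed burn of $1.8 million per quarter, $8.5 million funds roughly four to five quarters, which is consistent with a "cash into 2023" conclusion from mid-2022.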

Positive risk versus reward: Overall, we believe the concerns are outweighed by growth prospects and valuation. We believe the roughly billion-dollar market potential presents high reward for the risks.

Valuation attractive: We are maintaining our BUY rating, but lowering our 12-month price target to $9.00 from $10, based on an NPV analysis, representing significant upside from the current share price. We believe this valuation appropriately balances the company's high risks against its high growth prospects and large upside opportunities.
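The note cites an NPV analysis without disclosing its inputs. As a sketch of how a discounted-cash-flow-style price target is assembled, the snippet below discounts a projected cash-flow stream plus a terminal value and divides by share count; every figure (discount rate, cash flows, terminal value, share count) is an invented placeholder, not the analyst's actual model.

```python
def npv(rate, cashflows):
    """Discount yearly cash flows (years 1..n) back to the present."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

# Illustrative inputs only -- none of these come from the analyst's model.
discount_rate = 0.30                            # high rate reflecting early-stage risk
projected_fcf = [-8e6, -5e6, 2e6, 15e6, 40e6]   # assumed free cash flows, years 1-5
terminal_value = 300e6                          # assumed exit value in year 5
shares_outstanding = 33e6                       # assumed fully diluted share count

enterprise_value = (npv(discount_rate, projected_fcf)
                    + terminal_value / (1 + discount_rate) ** 5)
price_target = enterprise_value / shares_outstanding
print(round(price_target, 2))  # per-share value implied by these assumptions
```

With placeholder inputs the output is arbitrary; the structure (discounting, terminal value, dilution from the ~37 million acquisition shares) is what drives a target like $9 up or down.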

To learn more about QCI and how Qatalyst can deliver results for your business today, go to http://www.quantumcomputinginc.com.

About Quantum Computing Inc.

Quantum Computing Inc. (QCI) (NASDAQ: QUBT) is accelerating the value of quantum computing for real-world business solutions. The company's flagship product, Qatalyst, is the first software to bridge the power of classical and quantum computing, hiding complexity and empowering SMEs to solve complex computational problems today. QCI's expert team in finance, computing, security, mathematics and physics has over a century of experience with complex technologies; from leading edge supercomputing, to massively parallel programming, to the security that protects nations. Connect with QCI on LinkedIn and @QciQuantum on Twitter. For more information about QCI, visit http://www.quantumcomputinginc.com.

About QPhoton

QPhoton is a quantum photonics innovation company. It develops and commercializes powerful quantum nanophotonic technology and systems to transform critical areas of industry, including healthcare, cybersecurity, finance, environment, and computer vision. QPhoton maintains a growing and diverse portfolio of patented nanophotonic and quantum technology, covering quantum sensing, imaging, information privacy, authentication, data analytics, and quantum photonic computing.

Important Cautions Regarding Forward-Looking Statements

This press release contains forward-looking statements as defined within Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. By their nature, forward-looking statements and forecasts involve risks and uncertainties because they relate to events and depend on circumstances that will occur in the near future. Those statements include statements regarding the intent, belief or current expectations of Quantum Computing Inc. (the "Company"), and members of its management as well as the assumptions on which such statements are based. Prospective investors are cautioned that any such forward-looking statements are not guarantees of future performance and involve risks and uncertainties, and that actual results may differ materially from those contemplated by such forward-looking statements.

Statements in this press release that are not descriptions of historical facts are forward-looking statements relating to future events, and as such all forward-looking statements are made pursuant to the Private Securities Litigation Reform Act of 1995. Statements may contain certain forward-looking statements pertaining to future anticipated or projected plans, performance and developments, as well as other statements relating to future operations and results. Any statements in this press release that are not statements of historical fact may be considered to be forward-looking statements. Words such as "may," "will," "expect," "believe," "anticipate," "estimate," "intends," "goal," "objective," "seek," "attempt," "aim to," or variations of these or similar words, identify forward-looking statements. Such statements include statements regarding the Company's ability to consummate its planned acquisition of QPhoton, the anticipated benefits of such acquisition, and the Company's ability to successfully develop, market and sell its products.
Factors that could cause actual results to differ materially from those in the forward-looking statements contained in this press release include, but are not limited to, the parties' potential inability to consummate the proposed transaction, including as a result of a failure to satisfy closing conditions to the proposed transactions; risks that QPhoton will not be integrated successfully; failure to realize anticipated benefits of the combined operations; potential litigation relating to the proposed transaction and disruptions from the proposed transaction that could harm the Company's or QPhoton's business; ability to retain key personnel; the potential impact of announcement or consummation of the proposed transaction on relationships with third parties, including customers, employees and competitors; conditions in the capital markets; and those risks described in Item 1A in the Company's Annual Report on Form 10-K for the year ended December 31, 2021, which is expressly incorporated herein by reference, and other factors as may periodically be described in the Company's filings with the SEC. The Company undertakes no obligation to update or revise forward-looking statements to reflect changed conditions.

Qatalyst is the trademark of Quantum Computing Inc. All other trademarks are the property of their respective owners.

Company Contact:

Robert Liscouski, CEO

Quantum Computing, Inc.

+1 (703) 436-2161

Email Contact

Investor Relations Contact:

Ron Both or Grant Stude

CMA Investor Relations

+1 (949) 432-7566

Email Contact

Media Relations Contact:

Seth Menacker

Fusion Public Relations

+1 (201) 638-7561

qci@fusionpr.com

See the rest here:
Quantum Computing : Q1 reports initial revenue. We expect major acquisition of QPhoton to be positive for stock. Lowering P/T to $9. -...
