The Genesis of Artificial Intelligence and Digital Twins – InformationWeek

Artificial intelligence, machine learning, and digital twins -- why are we hearing so much about them and why do they suddenly seem critical? The simplest explanation is this: When something is too complex for a human to easily process or there is too little time for a human to make a critical decision, the only choice is to remove the human. That requires the ability to replicate the thought process a human might go through, which requires a lot of data and a deep understanding of the decision environment.

So why now? For decades, we saw huge advancements come primarily from the integration and shrinking of electronics. Smaller products, consuming less power, and offering dramatic increases in functionality per square inch were the hallmarks of technology progress.

Software applications have also evolved over the decades, most notably through the dramatic acceleration of the application adoption cycle. In the past two decades alone, users have shifted at alarmingly fast rates from treating applications as novelties, to using them as a convenience, to expecting them to work flawlessly all the time. At each adoption stage, a user's expectations rise, meaning the product must evolve and mature at very fast, scalable rates.

The combination of these hardware and software trends formed a convergence of product development requirements. New critical-need applications suddenly must feature higher-capacity real-time processing, time-sensitive decision-making, high to very high availability, and the expectation that platform-generated decisions be correct, every time.

While most people think of AI primarily as an end-user resource, AI has become necessary for faster product design and development. From the earliest stage of a chipset design or layout of a circuit through end-product validation, emulators have become necessary for building complex interfaces and environments. These emulators, known as digital twins, are a virtual manifestation of a process, environmental condition, or protocol capable of serving as a known good signal. In test terms, a digital twin can be a simple signal generator, a full protocol generator or a complete environment emulator. Digital twins allow developers to rapidly create a significantly wider range of test conditions to validate their product before shipping. High-performance digital twins typically contain their own AI engines for troubleshooting and regression testing new product designs.

The shift to AI-driven development and digital twins has become necessary due to the amount of functionality and autonomous decision-making expected in new products. Basic design principles specify the features and functionality of a product, then set up tests to validate them. The sheer number and complexity of interface standards makes that virtually impossible to construct by hand. By using digital twins, a much wider set of functional tests can be programmed in much less time. AI functionality then automates test processes based on what it discovers and predicts actions that might be needed. To understand this better, it's useful to understand the core of what makes any AI possible.

In its simplest form, software decision-making starts with algorithms. Basic algorithms run a set of calculations, and if you know what constitutes acceptable results, you can create a finite state machine using decision tree outcomes. This would hardly be considered intelligent. By adding a notion of state, however, and inserting a feedback loop, your basic algorithm can make outcome decisions a function of the current conditions compared to the current state. Combine this while evolving the decision tree into a behavior tree and you have formed the genesis of AI.
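To make the state-plus-feedback idea concrete, here is a minimal sketch (the thermostat scenario and all names are illustrative, not from the article). A stateless algorithm would map each temperature reading to the same answer every time; a finite state machine feeds its current state back into the next decision, so the same reading can produce different outcomes.

```python
class ThermostatFSM:
    """Tiny finite state machine with a feedback loop: the next
    decision is a function of the current reading AND the current
    state, giving hysteresis a stateless rule could not express."""

    def __init__(self, low=18.0, high=22.0):
        self.low, self.high = low, high
        self.state = "IDLE"  # current state feeds back into every decision

    def step(self, temperature):
        # Decision depends on state, not just the reading.
        if self.state == "IDLE" and temperature < self.low:
            self.state = "HEATING"
        elif self.state == "HEATING" and temperature > self.high:
            self.state = "IDLE"
        return self.state

fsm = ThermostatFSM()
readings = [20.0, 17.5, 19.0, 21.0, 23.0]
states = [fsm.step(t) for t in readings]
# The reading 19.0 keeps the heater ON here, yet would leave it OFF
# had the machine still been IDLE -- outcome is a function of state:
# states == ['IDLE', 'HEATING', 'HEATING', 'HEATING', 'IDLE']
```

Evolving the flat if/else decision tree above into a behavior tree (nodes that sequence, prioritize, and retry sub-behaviors) is the next step the article describes.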

The need for AI and digital twins is real, and when you question the veracity of one -- yours or someone else's -- go back to its genesis, otherwise known as the data. Data sources are the foundation of any digital assessment tool, and those sources determine the potential of an algorithm's modeling accuracy. If multiple data-rich sources are available, then the accuracy potential is high. If only basic data is available, the resulting algorithm or digital twin will not be accurate. This is something you can assess yourself.

We are at the early stage of AI, which means lots of products will be making lots of claims. Understanding what a product is supposed to deliver will allow you to assess it. Understanding which data sources it processes will tell you how accurately it can deliver the results the vendor promises. Digital twins are much further along in maturity -- especially those that emulate specific elements rather than entire ecosystems. Remember, though, that the more finite the environment, the more likely the digital twin will accurately replicate it.

We all want to understand how something works and how it produces its outcomes. With an understanding of the basic elements inside every AI system and digital twin, you can ask questions about their fundamental elements. If you get stuck, use the steps as a guide for questions to ask of the vendor. Most will share all or some of the key background or parameters to help you understand. If they don't, their competitors will.

BigBear.ai to Highlight Artificial Intelligence and Machine Learning Capabilities at Upcoming Industry Events – Business Wire

COLUMBIA, Md.--(BUSINESS WIRE)--BigBear.ai (NYSE: BBAI), a leader in AI-powered analytics and cyber engineering solutions, announced company executives are embarking on a thought-leadership campaign across multiple global industry events. The campaign will emphasize how the company's advancements in AI technologies will impact the federal and commercial markets in the coming months.

At these events, BigBear.ai leaders will highlight the capabilities of BigBear.ai's newly acquired company, ProModel Corporation, the importance of defining responsible AI usage, and how federal and commercial organizations leverage AI and ML.

The events BigBear.ai is scheduled to address include:

CTMA Partners Meeting, May 3-5, 2022: Virginia Beach, VA

Due to the rapid deployment and advancement of sensor technologies, artificial intelligence, and data science, the Department of Defense has turned to a more predictive-based approach to maintaining technology assets. The agency's recently revamped condition-based maintenance plus (CBM+) policy will accelerate the adoption, integration, and use of these emerging technologies while shifting its strategic approach from largely reactive maintenance to proactive maintenance. Participating as part of a panel session to address this trend, BigBear.ai Senior Vice President of Analytics Carl Napoletano will highlight ProModel's commercial capabilities and ProModel Government Services' legacy capabilities in the federal space.

DIA Future Technologies Symposium, May 11-12, 2022: Virtual Event

BigBear.ai's Senior Vice President of Analytics, Frank Porcelli, will brief the DIA community about BigBear.ai's AI-powered solutions at this virtual presentation. After providing a high-level overview and demonstration of the company's AI products (Observe, Orient, and Dominate), Frank will also offer insights into how AI technologies are being leveraged in the federal sector.

Conference on Governance of Emerging Technologies and Science, May 19-20, 2022: Phoenix, Arizona

Newly appointed BigBear.ai General Counsel Carolyn Blankenship will attend the ninth edition of Arizona State's annual conference, which examines how to create sustainable governance solutions that address new technologies' legal, regulatory, and policy ramifications. During her presentation, Carolyn will detail the importance of Intellectual Property (IP) law in AI and the responsible use of AI and other emerging technologies. Prior to starting as General Counsel, Carolyn organized and led Thomson Reuters' cross-functional team that outlined the organization's first set of Data Ethics Principles.

Automotive Innovation Forum, May 24-25, 2022: Munich, Germany

ProModel was among the select few organizations invited to attend Autodesk's Automotive Innovation Forum 2022. This premier industry event celebrates new automotive plant design and manufacturing technology solutions. Michael Jolicoeur, Director of the Autodesk Business Division at ProModel, will headline a panel at the conference and highlight the latest industry trends in automotive factory design and automation.

DAX 2022, June 4, 2022: University of Maryland, Baltimore County, Baltimore, Maryland

Three BigBear.ai experts - Zach Casper, Senior Director of Cyber; Leon Worthen, Manager of Strategic Operations; and Sammy Hamilton, Data Scientist/Engagement Engineer - will headline a panel discussion exploring the variety of ways AI and ML are deployed throughout the defense industry. The trio of experts will discuss how AI and ML solve pressing cybersecurity problems facing the Department of Defense and intelligence communities.

To connect with BigBear.ai at these events, send an email to events@bigbear.ai.

About BigBear.ai

BigBear.ai delivers AI-powered analytics and cyber engineering solutions to support mission-critical operations and decision-making in complex, real-world environments. BigBear.ai's customers, which include the US Intelligence Community, Department of Defense, the US Federal Government, as well as customers in manufacturing, logistics, commercial space, and other sectors, rely on BigBear.ai's solutions to see and shape their world through reliable, predictive insights and goal-oriented advice. Headquartered in Columbia, Maryland, BigBear.ai has additional locations in Virginia, Massachusetts, Michigan, and California. For more information, please visit: http://bigbear.ai/ and follow BigBear.ai on Twitter: @BigBearai.

Meta Platforms: Reality Labs, Hardware, And The Artificial Intelligence Story – Seeking Alpha

About six weeks ago I wrote Meta Platforms (NASDAQ:FB) Buying Frenzy. For simplicity, here's what I said:

Probably more than anything else, I want to emphasize that FB's price has dropped like a rock, but it's a cash pumping machine. Maybe it was overvalued above $300, or even when it was above $250. But now, at $210, it's looking like a fair deal. Maybe a very fair deal, indeed.

That article was mostly a reaction to Reality Labs. Specifically, many investors and analysts have been focusing too much on the $10 billion "loss" there. I think that's misguided, and I said this about Reality Labs:

... it's likely to be either neutral or positive for FB in the long term. In the very same breath, I shall dare to say that Reality Labs barely even matters to the profit generating truth. Simply look at the fundamentals again to confirm this for yourself.

In any case, at the time I wrote that article, FB was trading at $211.37 according to Seeking Alpha. And, today as I write this, FB is just above $212. It almost doesn't seem like anything has changed, but it has. In this article we're going to look closer at some of those big changes. Further, I discuss how valuation, AI and the core business aren't getting proper respect.

FB's price is reflecting a lot of turbulence in the news. Here's a small sample:

This flood of bad news isn't too surprising. It's been rough for growth stocks in general, but FB has been hit particularly hard. In late March and early April, it looked like nearly everything was going wrong. Then, FB reported earnings and FB jumped 13% in afterhours trading. Here's why:

Revenues rose 7% to $27.9 billion, while analysts (even after a number of downward revisions) had forecast 7.8% growth to $28.2 billion.

Net income declined just 21% vs. expectations for a 24% drop.

In operating metrics, daily active users rose 4% to 1.96 billion, topping expectations there, while monthly active users rose 3% to a generally in-line 2.94 billion.

This also helped, for those concerned with the "Metaverse" expenses:

The company sees 2022 total expenses in the range of $87 billion-$92 billion, down from a prior outlook of $90 billion-$95 billion.

In any case, all of this wasn't just some post-market "good vibes" because on April 28th, FB surged ahead by nearly 18%. This was good enough to be FB's best day ever. Keep in mind, this is juxtaposed with FB's biggest slump ever, back in early February. Either way, FB swung back hard.

All in all, here's what it all looks like, over the last six weeks:

Data by YCharts

In other words, all that additional bad news drove down the price. However, the Q1 earnings generated some good energy into FB. The bad news drove down the price, but the fundamentals, and future outlook, pushed the price back up. Clearly, investors had to stomach a huge wave of volatility.

What's most interesting is that if you looked at FB on March 21st and then looked again on May 3rd, you would think nothing much changed. You might even think it was a boring stock, and bland company. It's a bit funny.

Again, we come back to Reality Labs. Here's big news from The Information.

Meta Plots Ambitious VR Release Schedule of Four Headsets by 2024

The Verge provides some color regarding the hardware:

And, we must also remember FB's recent acquisitions, smartwatch, and other current real-world products. In other words, the Metaverse needs hardware, and FB is working very hard to supply that hardware to consumers.

To be very clear, there has also been a huge investment into infrastructure.

Facebook's 2017 through 2019 investments in data center construction and operations totaled $11.5 billion.

And, then this:

Meta spent $5.5 billion on data centers, servers, network infrastructure and office facilities in Q4 2021, as the company continues to expand its infrastructure investments.

Plus, recently this:

Meta, the parent company of Facebook, is investing over $1 billion in building a new data center in Spain to help build the foundation of its metaverse, along with plans to hire 2,000 staff in the region to push development.

The point is that FB is certainly creating software and building out a platform. But it's definitely not all virtual. It requires serious hardware, which isn't sexy, so it doesn't get great coverage.

It's my thesis that Reality Labs isn't nearly as important as FB's artificial intelligence activities. Stated differently, Reality Labs and the Metaverse are really more of a PR activity, but the real meat-and-potatoes is AI, and the related data centers. Not enough investors are aware of this:

Social media conglomerate Meta is the latest tech company to build an AI supercomputer: a high-speed computer designed specifically to train machine learning systems. The company says its new AI Research SuperCluster, or RSC, is already among the fastest machines of its type and, when complete in mid-2022, will be the world's fastest.

"Meta has developed what we believe is the world's fastest AI supercomputer," said Meta CEO Mark Zuckerberg in a statement. "We're calling it RSC for AI Research SuperCluster and it'll be complete later this year." [Emphasis: Author]

I'm going to try yet again to emphasize the importance of this. It's my belief that Reality Labs doesn't have to do very well at all because it also provides FB with a means to invest in AI.

So, if the Metaverse doesn't work out, FB is still left with a tremendously powerful infrastructure. Specifically, it's building out a very robust AI platform, under the cover of what's popular right now.

Putting that future aside, I think looking backwards tells the real story:

Meta Platforms Fundamental Valuation (FASTgraphs)

The current P/E is around 16. It drops down to around 15 looking out about one year or so. While 2022 isn't looking great, and growth appears to be slowing, FB is still a cash pumping machine. Putting any hyperbole aside, the raw truth is that FB is extremely profitable.

The growth story is hurt, but not dead. I suspect we'll see 16-20% growth in 2023 and 2024. That's good enough for double digit price increases. If history is a guide, FB will rally again. Just look at 2018 and even 2019. There's always been volatility. It's par for the course.

These fundamentals are very exciting to me. I'm much more interested in FB's core business, which is selling ads to eyeballs. That's super profitable. On the other hand, gambling on an "iPhone Moment" via augmented reality, headsets, and such, is far less exciting to me. (Although, the AI behind it all is very sexy.)

Adding it all up, I think FB's real value proposition is simple:

Most of the rest of the story with FB is marketing, hype, and hope. That's necessary to keep energy high, but it's not the reason to invest. FB trading at P/E of 16 is far more compelling. Buying a high quality, high profit, and otherwise undervalued company is the real magic.

Once again, I'm bullish on FB and expect a bright future. So, yes, FB is a buy.

Would you buy a work of art created by artificial intelligence? – Domus IT

And indeed: in 2020 The Guardian published "A robot wrote this entire article. Are you scared yet, human?". The piece was produced with GPT-3, artificial intelligence software created by OpenAI that is particularly efficient at generating text automatically. Two years earlier, Christie's had auctioned Portrait of Edmond Belamy, a work created by the collective Obvious using the GAN technique, for $423,500; the system was fed 15,000 portraits from the 14th to the 20th century so that the best synthesis could be chosen. We know what machines are capable of, but not what their intentions are: our assessment as humans is that, so far, they do not have any. Whether that also holds for algorithms is a subject of interest for readers of science fiction, or those who discuss the difference between brain and mind; we scientists are more attracted by the consequences of verticals: we want to train computers to solve a problem based on the data we provide.

7 Roles of Artificial Intelligence in the Defence Sector – Robotics and Automation News

Artificial Intelligence has managed to infiltrate many industries and sectors, including the defence sector and different military operations.

Artificial Intelligence is used by almost all nations for managing the defence sector and military operations.

Currently, huge investments are being made in this niche to further strengthen countries' defence sectors.

Here are seven roles of artificial intelligence in the defence sector.

Without an actual war, how would one teach soldiers about real wartime situations? In such an important task, the role of Artificial Intelligence is huge.

Artificial Intelligence can be used to create simulations and training models that help soldiers get used to the different fighting systems they will encounter, which is important preparation for actual military operations.

The navy and army of different countries use Artificial Intelligence to create sensor simulation programmes to help the soldiers.

Such AI is also combined with augmented reality and virtual reality to create more real-life situations.

The defence sector holds a great deal of critical and classified information. This sensitive information makes the sector extremely prone to cyberattacks.

The defence sector obviously hides its digital footprints by adding a layer of security.

Many times, the defence sector also hides its IP addresses; anyone can check their own IP using a service like What Is My IP. However, normal security is not enough to protect sensitive information.

For providing an added level of security, the military sector often uses Artificial Intelligence. AI plays a critical role in preventing unauthorized intrusion.

It is no secret that surveillance plays an important role in the defence sector and different military operations.

Artificial Intelligence can be used in surveillance for keeping an eye on suspicious activity.

Not only can it identify suspicious activity, it can also alert the respective authorities to tackle the situation. AI-enabled robots also play a critical role in such activities.

Weapons are no longer simple mechanical devices; new-age weapons are commonly embedded with Artificial Intelligence technology.

The application of AI can be most commonly seen in sophisticated missiles which are designed to accurately attack a target.

Military operations often have to deal with logistics too. The logistic operation in the defence sector is not like an ordinary logistic service.

Artificial Intelligence also plays a critical role in ensuring the safety, security and efficiency of the logistic system.

Robots and Artificial Intelligence are combined to create Remotely Operated Vehicles used for defusing explosives. Sending a person to defuse explosives can be dangerous for obvious reasons.

However, by creating delicate and highly intelligent Remotely Operated Vehicles, the entire process of defusing explosives can be made safer.

Artificial Intelligence is also used in Network Traffic Analysis. This system mostly monitors the internet traffic, especially the voice traffic passing through different software like Google Talk and Skype.

The voice traffic is then checked, in real time, for messages containing keywords like "kill", "blast" and "bomb". This technology is useful in preventing attacks and thus works towards the safety of the people.

Other usages of Artificial Intelligence in the defence and military sector include analysis of data from different sensors and satellites.

It is also used aboard ships that use sonar for detecting mines. Military robots, as discussed above, help ensure everyone's safety. AI and machine learning are also combined to handle unmanned vehicles and aircraft.

Usage of Artificial Intelligence in the military is not new. Many developed and developing nations use AI-based technology to strengthen their military operation.

Countries are investing heavily in Artificial Intelligence to develop different military infrastructures. The degree of such investment, of course, differs from one country to another.

Even though the financial investment is huge, it is worth it. Employing Artificial Intelligence also requires expertise: many scientists, coders and developers work together in laboratories to apply Artificial Intelligence to military operations.

The challenges of employing Artificial Intelligence in military operations come in the form of money and skills.

However, the same can be addressed by making it a priority. In the coming years, the usage of AI will keep improving in different sectors, including the defence sector.

Another Firing Among Google's A.I. Brain Trust, and More Discord – The New York Times

Less than two years after Google dismissed two researchers who criticized the biases built into artificial intelligence systems, the company has fired a researcher who questioned a paper it published on the abilities of a specialized type of artificial intelligence used in making computer chips.

The researcher, Satrajit Chatterjee, led a team of scientists in challenging the celebrated research paper, which appeared last year in the scientific journal Nature and said computers were able to design certain parts of a computer chip faster and better than human beings.

Dr. Chatterjee, 43, was fired in March, shortly after Google told his team that it would not publish a paper that rebutted some of the claims made in Nature, said four people familiar with the situation who were not permitted to speak openly on the matter. Google confirmed in a written statement that Dr. Chatterjee had been "terminated with cause."

Google declined to elaborate about Dr. Chatterjee's dismissal, but it offered a full-throated defense of the research he criticized and of its unwillingness to publish his assessment.

"We thoroughly vetted the original Nature paper and stand by the peer-reviewed results," Zoubin Ghahramani, a vice president at Google Research, said in a written statement. "We also rigorously investigated the technical claims of a subsequent submission, and it did not meet our standards for publication."

Dr. Chatterjee's dismissal was the latest example of discord in and around Google Brain, an A.I. research group considered to be a key to the company's future. After spending billions of dollars to hire top researchers and create new kinds of computer automation, Google has struggled with a wide variety of complaints about how it builds, uses and portrays those technologies.

Tension among Google's A.I. researchers reflects much larger struggles across the tech industry, which faces myriad questions over new A.I. technologies and the thorny social issues that have entangled these technologies and the people who build them.

The recent dispute also follows a familiar pattern of dismissals and dueling claims of wrongdoing among Google's A.I. researchers, a growing concern for a company that has bet its future on infusing artificial intelligence into everything it does. Sundar Pichai, the chief executive of Google's parent company, Alphabet, has compared A.I. to the arrival of electricity or fire, calling it one of humankind's most important endeavors.

Google Brain started as a side project more than a decade ago when a group of researchers built a system that learned to recognize cats in YouTube videos. Google executives were so taken with the prospect that machines could learn skills on their own that they rapidly expanded the lab, establishing a foundation for remaking the company with this new artificial intelligence. The research group became a symbol of the company's grandest ambitions.

Before she was fired, Dr. Gebru was seeking permission to publish a research paper about how A.I.-based language systems, including technology built by Google, may end up using the biased and hateful language they learn from text in books and on websites. Dr. Gebru said she had grown exasperated over Google's response to such complaints, including its refusal to publish the paper.

A few months later, the company fired the other head of the team, Margaret Mitchell, who publicly denounced Google's handling of the situation with Dr. Gebru. The company said Dr. Mitchell had violated its code of conduct.

The paper in Nature, published last June, promoted a technology called reinforcement learning, which the paper said could improve the design of computer chips. The technology was hailed as a breakthrough for artificial intelligence and a vast improvement to existing approaches to chip design. Google said it used this technique to develop its own chips for artificial intelligence computing.

Google had been working on applying the machine learning technique to chip design for years, and it published a similar paper a year earlier. Around that time, Google asked Dr. Chatterjee, who has a doctorate in computer science from the University of California, Berkeley, and had worked as a research scientist at Intel, to see if the approach could be sold or licensed to a chip design company, the people familiar with the matter said.

But Dr. Chatterjee expressed reservations in an internal email about some of the paper's claims and questioned whether the technology had been rigorously tested, three of the people said.

While the debate about that research continued, Google pitched another paper to Nature. For the submission, Google made some adjustments to the earlier paper and removed the names of two authors, who had worked closely with Dr. Chatterjee and had also expressed concerns about the paper's main claims, the people said.

When the newer paper was published, some Google researchers were surprised. They believed that it had not followed a publishing approval process that Jeff Dean, the company's senior vice president who oversees most of its A.I. efforts, said was necessary in the aftermath of Dr. Gebru's firing, the people said.

Google and one of the paper's two lead authors, Anna Goldie, who wrote it with a fellow computer scientist, Azalia Mirhoseini, said the changes from the earlier paper did not require the full approval process. Google allowed Dr. Chatterjee and a handful of internal and external researchers to work on a paper that challenged some of its claims.

The team submitted the rebuttal paper to a so-called resolution committee for publication approval. Months later, the paper was rejected.

The researchers who worked on the rebuttal paper said they wanted to escalate the issue to Mr. Pichai and Alphabet's board of directors. They argued that Google's decision to not publish the rebuttal violated its own A.I. principles, including upholding high standards of scientific excellence. Soon after, Dr. Chatterjee was informed that he was no longer an employee, the people said.

Ms. Goldie said that Dr. Chatterjee had asked to manage their project in 2019 and that they had declined. When he later criticized it, she said, he could not substantiate his complaints and ignored the evidence they presented in response.

"Sat Chatterjee has waged a campaign of misinformation against me and Azalia for over two years now," Ms. Goldie said in a written statement.

She said the work had been peer-reviewed by Nature, one of the most prestigious scientific publications. And she added that Google had used their methods to build new chips and that these chips were currently used in Google's computer data centers.

Laurie M. Burgess, Dr. Chatterjee's lawyer, said it was disappointing that "certain authors of the Nature paper are trying to shut down scientific discussion by defaming and attacking Dr. Chatterjee for simply seeking scientific transparency." Ms. Burgess also questioned the leadership of Dr. Dean, who was one of 20 co-authors of the Nature paper.

"Jeff Dean's actions to repress the release of all relevant experimental data, not just data that supports his favored hypothesis, should be deeply troubling both to the scientific community and the broader community that consumes Google services and products," Ms. Burgess said.

Dr. Dean did not respond to a request for comment.

After the rebuttal paper was shared with academics and other experts outside Google, the controversy spread throughout the global community of researchers who specialize in chip design.

The chip maker Nvidia says it has used methods for chip design that are similar to Google's, but some experts are unsure what Google's research means for the larger tech industry.

"If this is really working well, it would be a really great thing," said Jens Lienig, a professor at the Dresden University of Technology in Germany, referring to the A.I. technology described in Google's paper. "But it is not clear if it is working."

Saving the lovable koala: How artificial intelligence from SAS is being used in the fight – WRAL TechWire

And the 2019-2020 bushfire season scorched millions of acres, killing 33 people and destroying thousands of homes. The fires also decimated wildlife, with an estimated 3 billion animals in the path of the flames.

Australia's iconic koala has seen a steep population drop and is now endangered. Among the causes? Climate-related weather events like fires and floods, as well as habitat destruction from development.

Technology drives rapid response and resilience

Attentis, an Australian technology firm, has designed and manufactured a range of intelligent sensors that provide local officials and emergency response teams with real-time information and monitoring. These sensors are powered by artificial intelligence (AI) and machine learning from SAS, the leader in analytics.

"Our sensor networks help monitor, measure and mitigate many of the effects of climate change, from fire ignition to flooding to air quality, soil and environmental health, and much more," said Attentis Managing Director and founder Cameron McKenna. "Attentis multi-sensors are now equipped with AI-embedded SAS IoT analytics so that local officials, for the first time, can identify conditions and environmental factors such as fire ignitions and rapid water-level rise and respond immediately, while continuing to measure and monitor live environmental conditions to aid situational awareness."

Powering the world's largest environmental-monitoring network

Attentis has created the world's first integrated, high-speed sensor network throughout Australia's Latrobe Valley. Today, this network is the world's largest real-time environmental-monitoring network.

Covering 913 square miles, the Latrobe Valley Information Network and its array of AI-powered sensors collect and deliver vital data that has improved local agriculture, utilities and forest industries, as well as emergency services.

Thousands of local and neighboring residents now access this data on a regular basis to monitor rainfall, air quality, fire starts, weather and more.

Collecting more real-time situational data via Attentis sensor networks and quickly uncovering key insights from that data using SAS Analytics for IoT means that local officials can make better, faster and more informed decisions that protect citizens, property and natural resources.

"SAS and Attentis boost the resiliency of the people of Latrobe Valley in the face of fires, floods and other challenges brought about by climate change," said McKenna.

Protecting koalas and endangered species with AI

Historical data can also be used by government and academic researchers looking to protect endangered species like the koala. Understanding and monitoring threats to koalas such as bushfires and floods can help scientists assess the health of the population and develop strategies to sustain koala numbers.

SAS AI technologies are already used to protect other endangered species. See how WildTrack uses SAS Analytics to protect cheetahs, rhinos and more.

Artificial Intelligence of Things

Advanced analytics like AI help harness value from the Internet of Things (IoT). Data management, cloud and high-performance computing techniques help manage and analyze the influx of IoT data from sensors like those built by Attentis. Insights from streaming analytics and AI underpin digital transformation efforts in a host of industries, including retail, manufacturing, energy, transportation and government, that improve efficiency, convenience and security.

"With fires and floods, every second matters. By combining Attentis intelligent sensors with our cloud-native SAS Analytics for IoT solution, we're accelerating the speed and accuracy at which officials can respond to these environmental threats," said Jason Mann, Vice President of IoT at SAS. "For example, with intelligent sensor networks and predictive analytics, emergency responders can now continuously and accurately assess river heights, rainfall and soil moisture in real time. By closely monitoring and analyzing this data, these officials can quickly act on new insights and issue early flood warnings to people in high-risk areas who may be affected or inundated by severe weather."
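The kind of real-time flood monitoring described above can be sketched as a simple sliding-window check over a stream of sensor readings. This is only an illustrative toy, not SAS or Attentis code; the window size, rise threshold and readings are all invented for the example.

```python
from collections import deque

def flood_alert(readings, window=6, rise_threshold=0.5):
    """Emit an alert whenever the river level rises more than
    `rise_threshold` metres across the last `window` readings.
    Thresholds and data are illustrative, not real product values."""
    recent = deque(maxlen=window)   # keeps only the newest `window` levels
    alerts = []
    for t, level in readings:
        recent.append(level)
        # Compare the newest reading against the oldest in the window.
        if len(recent) == window and recent[-1] - recent[0] > rise_threshold:
            alerts.append((t, recent[-1]))
    return alerts

# Simulated river-level readings as (time step, level in metres)
readings = [(0, 1.0), (1, 1.1), (2, 1.2), (3, 1.5), (4, 1.9), (5, 2.4), (6, 3.0)]
print(flood_alert(readings))  # alerts fire once the level climbs rapidly
```

A production system would replace the threshold with a predictive model and feed it live sensor data, but the streaming structure is the same: a bounded window of recent readings and a rule evaluated on every new arrival.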

Original post:
Saving the lovable koala: How artificial intelligence from SAS is being used in the fight - WRAL TechWire


Artificial Intelligence In Insurtech Market to See Thriving Worldwide | Cognizant, Next IT Corp, Kasisto and more – Digital Journal

DLF Research added a research publication on the Artificial Intelligence In Insurtech Market, breaking down major business segments and highlighting wider-level geographies for a deep-dive analysis of market data. The study balances qualitative and quantitative information about the Artificial Intelligence In Insurtech market, providing historical market size data (volume and value) from 2017 to 2021, with estimates and forecasts through 2030.

Among the key and emerging players covered and profiled are Cognizant, Next IT Corp, Kasisto, Cape Analytics Inc., Microsoft, Google, Salesforce, Amazon Web Services, Lemonade, Lexalytics and H2O.ai.

Download Latest Artificial Intelligence In Insurtech Market Research Sample Copy Now @ https://www.datalabforecast.com/request-sample/388936-artificial-intelligence-in-insurtech-market

1. External Factor Analysis

An external analysis looks at the wider business environment that affects the business. This industry assessment covers the factors outside the company's control, including both micro- and macro-environmental factors.

MACRO ENVIRONMENT: In-depth coverage of factors such as governmental laws, social constructs and cultural norms, environmental conditions, economics, and technology.

MICRO ENVIRONMENT: Factors highlighting competitive rivalry.

2. Growth & Margins

The study highlights players with a stellar growth track record. From 2017 to 2020, some companies posted strong sales figures, with net income doubling over that period and operating and gross margins steadily expanding. The rise in gross margins over the past few years points to the strong pricing power of competitive companies in the industry, despite increases in the cost of goods sold.

Check for more detail, Enquire about Latest Edition with Current Scenario Analysis @ https://www.datalabforecast.com/request-enquiry/388936-artificial-intelligence-in-insurtech-market

3. Ambitious growth plans & rising competition?

Industry players plan to launch new products into various markets around the globe across applications/end uses such as Automotive, Healthcare, Information Technology and Others. The study examines some of the latest innovative products that may be introduced in EMEA markets in the last quarter of 2021. Given companies' all-around development activities, some player profiles are worth particular attention.

4. Where the Artificial Intelligence In Insurtech Industry is today

Though the latest year was not especially encouraging, market segments such as Service and Product showed modest gains, and the growth scenario could have changed had manufacturers made ambitious moves earlier. With decent valuations and an emerging investment cycle across Asia Pacific, North America, Europe, South America and the Middle East & Africa, many growth opportunities lie ahead for companies in 2021, with stronger returns expected beyond.

Buy the full version of this research study @ https://www.datalabforecast.com/buy-now/?id=388936-artificial-intelligence-in-insurtech-market&license_type=su

Insights the study offers:

Market revenue splits by the most promising business segments. [By Type (Service, Product), By Application (Automotive, Healthcare, Information Technology, Others) and any other applicable business segment within the scope of the report]

Market Share & Sales Revenue by Key Players & Local Emerging Regional Players. [Some of the players covered in the study are Cognizant, Next IT Corp, Kasisto, Cape Analytics Inc., Microsoft, Google, Salesforce, Amazon Web Services, Lemonade, Lexalytics, H2O.ai]

A separate section on Entropy offering useful insights into leaders' aggressiveness toward the market. [Mergers & acquisitions / recent investment and key development activity, including seed funding]

Competitive Analysis: Company profiles of listed players with separate SWOT analysis, overview, product/service specifications, headquarters, downstream buyers, and upstream suppliers.

Gap analysis by region. The country break-up will help you dig out trends and opportunities in a specific territory of your business interest.

Thanks for reading the Global Artificial Intelligence In Insurtech Industry research publication; you can also get individual chapter-wise sections or region-wise report versions for America, LATAM, Europe, the Nordic nations, Oceania, Southeast Asia, or just Eastern Asia.

Contact: Henry K
Data Lab Forecast
86 Van Wagenen Avenue, Jersey, New Jersey 07306, United States
Phone: +1 917-725-5253
Website: https://www.datalabforecast.com/
Email: [emailprotected]
Explore News Releases: https://newsbiz.datalabforecast.com/

Here is the original post:
Artificial Intelligence In Insurtech Market to See Thriving Worldwide | Cognizant, Next IT Corp, Kasisto and more - Digital Journal


In the News: Dr. Manjeet Rege on Artificial Intelligence and Law Firms – Newsroom | University of St. Thomas – University of St. Thomas Newsroom

Dr. Manjeet Rege, professor and director of the Center of Applied AI at St. Thomas, wrote a column for the Star Tribune on how AI can be useful for lawyers.

From the column: "One sector, however, where many have doubted that AI could make a difference is critical thinking by lawyers in the courtroom. There are many ways, though, that AI can help the legal process without replacing lawyers. A lawyer who prepares adequately has more chances of winning their case, and AI can help with research. AI eliminates the arduous task of legal analysis by searching and discovering case numbers and main arguments of cases in bulk documents."
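The "discovering case numbers in bulk documents" step above is, at its simplest, a pattern-matching pass over a document collection. A minimal sketch follows; the citation pattern covers only a simplified U.S.-style format, and the sample documents are invented for illustration.

```python
import re

# Simplified pattern for U.S.-style case citations, e.g. "410 U.S. 113".
# Real legal citation formats are far more varied than this toy regex.
CITATION = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.\d[a-z]*|S\.\s?Ct\.)\s+\d{1,4}\b")

def extract_citations(documents):
    """Return the set of case citations found across a collection of documents."""
    found = set()
    for doc in documents:
        found.update(CITATION.findall(doc))
    return found

docs = [
    "The court relied on Roe v. Wade, 410 U.S. 113, in its reasoning.",
    "See also Miranda v. Arizona, 384 U.S. 436, and later rulings.",
]
print(sorted(extract_citations(docs)))
```

Commercial legal research tools go well beyond this, ranking retrieved cases by relevance and extracting the main arguments, but bulk citation discovery of this kind is the entry point that saves lawyers the manual sweep.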

In addition, intellectual property law has many possible AI applications.

For instance, a key component of intellectual property is due diligence. Finding critical patents in a portfolio is a cumbersome task in which automation can be handy. AI technologies mimic the behavior of expert searchers by comparing competitor and market information with existing patents. In 2016, the due diligence for the acquisition of ARM by SoftBank for $31 billion was completed in a matter of days using AI systems, as reported by Forbes.
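The "comparing competitor and market information with existing patents" idea can be illustrated with a basic bag-of-words similarity ranking. This is a hedged toy sketch, not how any commercial due-diligence system works; the patent texts, IDs and query are all hypothetical, and real systems would use richer representations than raw word counts.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_patents(query_text, patents):
    """Rank patent abstracts by textual similarity to a query description."""
    q = Counter(query_text.lower().split())
    scored = [(cosine(q, Counter(text.lower().split())), pid)
              for pid, text in patents.items()]
    return sorted(scored, reverse=True)  # most similar first

# Hypothetical portfolio: two abstracts with made-up IDs.
patents = {
    "P1": "low power processor core design for mobile devices",
    "P2": "chemical process for polymer coating",
}
ranked = rank_patents("power efficient processor core for mobile", patents)
print(ranked[0][1])  # the mobile-processor patent ranks first
```

Scaled up with better text representations and trained relevance models, this is how a system can surface the critical patents in a large portfolio in days rather than the weeks a manual review would take.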

View original post here:
In the News: Dr. Manjeet Rege on Artificial Intelligence and Law Firms - Newsroom | University of St. Thomas - University of St. Thomas Newsroom


India, Germany agree to work together with focus on AI, startups – ETTelecom

India's Science and Technology and Earth Sciences Minister Jitendra Singh and German Education and Research Minister Bettina Stark-Watzinger, during their meeting in Berlin, expressed satisfaction with the ongoing science and technology cooperation between the two countries, which is one of the strategic pillars of the bilateral relationship.

"There is a lot of scope to work together in Artificial Intelligence, for which experts on two sides have already met. An Indo-German call for proposals for this would be raised soon inviting proposals from researchers and industry," officials said.

The two countries have already started mapping each other's strength in areas such as application of Artificial Intelligence in Sustainability and Healthcare.

Both ministers welcomed several recently finalized initiatives for human capacity development in science and engineering, including Women Involvement in Science and Engineering Research (WISER), which facilitates lateral entry of women researchers into ongoing S&T projects, and Paired Early Career Fellowships (PECF), which create an inclusive ecosystem for Indo-German S&T cooperation through exchanges of young researchers on both sides.

Stark-Watzinger supported the idea of further strengthening bilateral scientific cooperation by partnering in emerging science and technology areas where both Germany and India have the strength to work together and serve both societies.

Here is the original post:
India, Germany agree to work together with focus on AI, startups - ETTelecom
