
At last, a way to build artificial intelligence with business results in mind: ModelOps – ZDNet

How should IT leaders and professionals go about selecting and delivering the technology required to realize the storied marvels of artificial intelligence and machine learning? AI and ML require many moving parts -- ecosystems, data, platforms, and, last but not least, people -- to be in the right places, moving in the right direction, to deliver on the promise these technologies bring.

Is there a way for IT leaders to be proactive about AI and ML without ruffling and rattling an organization of people who want the miracles of AI and ML delivered tomorrow morning? The answer is yes.

The authors of a recent report from MIT Sloan Management Review and SAS advocate a relatively new methodology for successfully delivering AI and ML to enterprises, called "ModelOps." While there are a lot of "xOps" terms now entering our lexicon, such as MLOps or AIOps, ModelOps is "more mindset than a specific set of tools or processes, focusing on effective operationalization of all types of AI and decision models."

That's because in AI and ML, models are the heart of the matter, the mechanisms that dictate the assembly of the algorithms and assure continued business value. ModelOps, which is short for "model operationalization," "focuses on model life cycle and governance; intended to expedite the journey from development to deployment -- in this case, moving AI models from the data science lab to the IT organization as quickly and effectively as possible."

In terms of operationalizing AI and ML, "a lot falls back on IT," according to Iain Brown, head of data science for SAS, U.K. and Ireland, who is quoted in the report. "You have data scientists who are building great innovative things. But unless they can be deployed in the ecosystem or the infrastructure that exists -- and typically that involves IT -- there's no point in doing it. The data science community and AI teams should be working very closely with IT and the business, being the conduit to join the two so there's a clear idea and definition of the problem that's being faced, a clear route to production. Without that, you're going to have disjointed processes and issues with value generation."

ModelOps is a way to help IT leaders bridge that gap between analytics and production teams, making the AI- and ML-driven lifecycle "repeatable and sustainable," the MIT-SAS report states. It's a step above MLOps and AIOps, which "have a more narrow focus on machine learning and AI operationalization, respectively"; ModelOps focuses on the delivery and sustainability of the predictive analytics models that are the core of AI and ML's value to the business. ModelOps can make a difference, the report's authors continue, "because without it, your AI projects are much more likely to fail completely or take longer than you'd like to launch. Only about half of all models ever make it to production, and of those that do, about 90% take three months or longer to deploy."

Getting to ModelOps to manage AI and ML involves IT leaders and professionals pulling together four key elements of the business value equation, as outlined by the report's authors.

Ecosystems: These days, every successful technology endeavor requires connectivity and network power. "An AI-ready ecosystem should be as open as possible," the report states. "Such ecosystems don't just evolve naturally. Any company hoping to use an ecosystem successfully must develop next-generation integration architecture to support it and enforce open standards that can be easily adopted by external parties."

Data: Get to know what data is important to the effort. "Validate its availability for training and production. Tag and label data for future usage, even if you're not sure yet what that usage might be. Over time, you'll create an enterprise inventory that will help future projects run faster."

Platforms: Flexibility and modularity -- the ability to swap out pieces as circumstances change -- are key. The report's authors advocate buying over building, as many providers have already worked out the details of building and deploying AI and ML models. "Determine your cloud strategy. Will you go all in with one cloud service provider? Or will you use different CSPs for different initiatives? Or will you take a hybrid approach, with some workloads running on-premises and some with a CSP? Some major CSPs typically offer more than just scalability and storage space, such as providing tools and libraries to help build algorithms and assisting with deploying models into production."

People: Collaboration is the key to successful AI and ML delivery, but it's also important that people have a sense of ownership over their parts of the projects. "Who owns the AI software and hardware -- the AI team or the IT team, or both? This is where you get organizational boundaries that need to be clearly defined, clearly understood, and coordinated." Along with data scientists, a group just as important to ModelOps is data engineers, who bring "significant expertise in using analytics and business intelligence tools, database software, and the SQL data language, as well as the ability to consistently produce clean, high-quality, ethical data."

Go here to see the original:
At last, a way to build artificial intelligence with business results in mind: ModelOps - ZDNet


One Easy Way To Strengthen the Abraham Accords? Artificial Intelligence | Opinion – Newsweek

It has sometimes been said that the best way to predict the future is to create it. That, in essence, is precisely what Sens. Marco Rubio (R-FL) and Maria Cantwell (D-WA) have done in introducing the "United States-Israel Artificial Intelligence Act." The bill, submitted June 17, envisions the establishment of a joint artificial intelligence (AI) research facility by the United States government and would provide $50 million for it over the course of five years.

The bill's goal is to "leverage the experience, knowledge and expertise" of educational institutions and private businesses in the United States and Israel to pursue machine learning, image classification, object detection, speech recognition, natural language processing, data labeling, computer vision and model explainability and interpretability.

Sen. Rubio stressed that the United States, and indeed the world, "benefit immensely when we engage in joint cooperation and partnerships with Israel, a global technology leader and our most important ally in the Middle East," while also arguing that these bilateral research ties will help both nations stay ahead of China's ever-growing technology threat.

Why is AI so important? Essentially, the sheer volume of data generated on a daily basis eclipses our ability to digest it and then make appropriate decisions based on it. AI is one of today's most important tools, helping us keep pace and make critical decisions. But the data inputs grow ever larger and appear ever faster. Thus, significant effort needs to be put into developing faster and better AI tools. The race to absorb, interpret, understand and make decisions on data, as well as our ability to simply keep up, may be never-ending. Sens. Rubio and Cantwell should thus be commended for their focus on this critical area.

Sens. Rubio and Cantwell should also be commended for recognizing Israel's technological prowess and for creating a partnership with that crucial American ally. This bill is a great example of smart governance: recognizing an important need, finding a smart way to address it and finding top-notch partners to help implement the idea. Smart, efficient and beneficial: That's how government should work. But here's an idea that could make the future this bill seeks to create even brighter, smarter, more efficient and beneficial on many levels: Incorporate the United Arab Emirates (UAE) as well.

The UAE has made significant strides in the area of AI since 2017, and it aims to be the world leader in AI by 2031. The UAE has the talent, drive and funds to make this goal a reality. It has an AI ministry and has established goals of using AI in healthcare, aviation, education and other crucial areas. The UAE has proven itself to be a country that has big dreams and ambitions, and it succeeds in the realization of those dreams and ambitions: from developing massive, architecturally significant, beautiful cities out of the desert sands to successfully launching the Mars Hope Mission, the first unmanned interplanetary satellite the nation has spearheaded. Much can be learned from and gained by the United States and Israel if the UAE is folded into this meaningful partnership.

It is no secret that the Abraham Accords have very significant bipartisan support in the United States and have also attained significant worldwide support, including from many of our allies in the Middle East. Much has been said about encouraging more countries to sign onto the Abraham Accords (who is next to sign is still anyone's guess) and about enhancing the already-established connections between the Abraham Accords signees.

Through the hard work of many, and through the courage and boldness of leaders in the Middle East, including Sheikh Mohamed bin Zayed bin Sultan Al Nahyan, the Abraham Accords have taken root, and the sapling is sprouting healthy branches and flourishing leaves. For the Abraham Accords to fully flower, continue to thrive and grow even stronger, we need to provide the right conditions for them to establish deeper roots in rich, fertile soil. We need to tend to the Abraham Accords, and the countries who have partnered on them, with care, devotion and commitment.

What better way to travel further down the noble, imperative Abraham Accords path than by bringing the UAE into the mix? The "United States-Israel-United Arab Emirates Artificial Intelligence Center Act" has a great ring to it. Including the UAE would be a win-win all around: for the further development of AI, for the United States, for Israel, for the UAE and for stability and cooperation in the Middle East. And including the UAE would embrace another reliable ally of the United States, the overall results of which would also allow us and our allies to stay ahead of the curve against competitors.

Aside from the many benefits the bill itself could achieve by adding the UAE, such an addition may even entice others to join the Abraham Accords. Humanity would benefit tremendously from that. Any takers?

Jason D. Greenblatt served as President Donald Trump's White House envoy to the Middle East for nearly three years. Follow him on Twitter: @GreenblattJD.

The views expressed in this article are the writer's own.

Follow this link:
One Easy Way To Strengthen the Abraham Accords? Artificial Intelligence | Opinion - Newsweek


Hewlett Packard Enterprise Acquires Determined AI to Accelerate Artificial Intelligence Innovation with Fast and Simple Machine Learning Modeling -…

HOUSTON--(BUSINESS WIRE)--Hewlett Packard Enterprise (NYSE: HPE) today announced that it has acquired Determined AI, a San Francisco-based startup that delivers a powerful and robust software stack to train AI models faster, at any scale, using its open source machine learning (ML) platform.

HPE will combine Determined AI's unique software solution with its world-leading AI and high performance computing (HPC) offerings to enable ML engineers to easily implement and train machine learning models to provide faster and more accurate insights from their data in almost every industry.

"As we enter the Age of Insight, our customers recognize the need to add machine learning to deliver better and faster answers from their data," said Justin Hotard, senior vice president and general manager, HPC and Mission Critical Solutions (MCS), HPE. "AI-powered technologies will play an increasingly critical role in turning data into readily available, actionable information to fuel this new era. Determined AI's unique open source platform allows ML engineers to build models faster and deliver business value sooner without having to worry about the underlying infrastructure. I am pleased to welcome the world-class Determined AI team, who share our vision to make AI more accessible for our customers and users, into the HPE family."

Determined AI accelerates innovation with open source AI solutions to build and train models faster and easier

Building and training optimized machine learning models at scale is considered the most demanding and critical stage of ML development, and doing it well increasingly requires researchers and scientists to face many challenges frequently found in HPC. These include properly setting up and managing a highly parallel software ecosystem and infrastructure spanning specialized compute, storage, fabric and accelerators. Additionally, users need to program, schedule and train their models efficiently to maximize the utilization of the highly specialized infrastructure they have set up, creating complexity and slowing down productivity.

Determined AI's open source machine learning training platform closes this gap, helping researchers and scientists focus on innovation and accelerating their time to delivery by removing the complexity and cost associated with machine learning development. This includes making it easy to set up, configure, manage and share workstations or AI clusters that run on-premises or in the cloud.

Determined AI also makes it easier and faster for users to train their models through a range of capabilities that significantly speed up training; in one use case related to drug discovery, training time went from three days to three hours. These capabilities include accelerator scheduling, fault tolerance, high speed parallel and distributed training of models, advanced hyperparameter optimization and neural architecture search, reproducible collaboration and metrics tracking.

"The Determined AI team is excited to join HPE, who shares our vision to realize the potential of AI," said Evan Sparks, CEO of Determined AI. "Over the last several years, building AI applications has become extremely compute, data, and communication intensive. By combining with HPE's industry-leading HPC and AI solutions, we can accelerate our mission to build cutting edge AI applications and significantly expand our customer reach."

HPC: the foundation for delivering speed-to-insight and AI at scale

AI training is continuing to fuel projects and innovation with intelligence, and to do so effectively, and at scale, will require specialized computing. According to IDC, the accelerated AI server market, which plays an integral role in providing targeted capabilities for image and data-intensive training, is expected to grow by 38% each year and reach $18B by 2024.

The massive computing power of HPC is also increasingly being used to train and optimize AI models, in addition to combining with AI to augment workloads such as modeling and simulation, which are well-established tools to speed time-to-discovery. Intersect360 Research notes that the HPC market will grow by more than 40%, reaching almost $55 billion in revenue by 2024.

To tackle the growing complexity of AI with faster time-to-market, HPE is committed to continuing to deliver advanced and diverse HPC solutions to train machine learning models and optimize applications for any AI need, in any environment. By adding Determined AI's open source capabilities, HPE is furthering its mission of making AI heterogeneous and empowering ML engineers to build AI models at greater scale.

Additionally, through HPE GreenLake cloud services for High Performance Computing (HPC), HPE is making HPC and AI solutions even more accessible and affordable to the commercial market with fully managed services that can run in a customer's data center, in a colocation facility or at the edge using the HPE GreenLake edge-to-cloud platform.

The Determined AI team will join HPE's High Performance Computing (HPC) & Mission Critical Solutions (MCS) business group

Determined AI was founded in 2017 by Neil Conway, Evan Sparks, and Ameet Talwalkar, and is based in San Francisco. It launched its open source platform in 2020, and as a result of its focus on model training, Determined AI has quickly emerged as a leading player in the evolving machine learning software ecosystem. Its solution has been adopted by customers across a wide range of industries, such as biopharmaceuticals, autonomous vehicles, defense contracting, and manufacturing.


About Determined AI

Determined AI is an early-stage company at the forefront of machine learning technology, helping customers reap the benefits of high-performance computing without the required expertise or staffing. Determined AI provides an open source machine learning solution that speeds up time-to-market by increasing developer productivity, improving resource utilization and reducing risk. The company is headquartered in San Francisco. For more information, visit: http://www.determined.ai

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (NYSE: HPE) is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open and intelligent technology solutions delivered as a service spanning Compute, Storage, Software, Intelligent Edge, High Performance Computing and Mission Critical Solutions with a consistent experience across all clouds and edges, designed to help customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: http://www.hpe.com

Continued here:
Hewlett Packard Enterprise Acquires Determined AI to Accelerate Artificial Intelligence Innovation with Fast and Simple Machine Learning Modeling -...


Artificial Intelligence and the Future of Engineering – Analytics Insight

Not long ago, engineering was all about blueprints, sketches, and physical models. Today it is intensively about software tools and computer designs. The demand for artificial intelligence and digital technology has been gaining momentum, and advancements in the AI sector are transforming smart systems and supervised machine learning to a great extent.

Artificial intelligence systems will ease laborious tasks that engineers do, such as finding relevant content, fixing errors, and determining solutions; smart systems can help them finish these jobs quickly. AI and digital tech can also assist systems engineers in creating sophisticated designs, incorporating sensor-based design procedures, and delivering the designs to intelligent manufacturing facilities.

But AI may not approach a project the way a human designer would, and sometimes the results can go off track. That is because today's machines are usually expert systems with software-enabled decision-making: the engine of this software is based on if-then rules acquired through experience.

When new knowledge is registered in the library, the software uses if-then rules to derive new facts, the "ifs," and suggests the various solutions, the "thens," that follow.
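As a rough sketch of that if-then loop, consider a minimal forward-chaining engine in Python; the rules and facts below are invented for illustration and are not from the article:

# Minimal forward-chaining sketch of an expert system's if-then loop.
# Rules and facts are invented for illustration only.
rules = [
    ({"high_vibration", "high_temperature"}, "bearing_wear"),
    ({"bearing_wear"}, "schedule_maintenance"),
]
facts = {"high_vibration", "high_temperature"}

# Repeatedly apply rules: when all the "ifs" of a rule hold, add its "then".
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # new knowledge expands the fact base
            changed = True

print(sorted(facts))
# ['bearing_wear', 'high_temperature', 'high_vibration', 'schedule_maintenance']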

This process serves as the basis of AI and machine learning. As people are connected with the help of the internet, this opens the doors for smart machines to bring in new services and opportunities.

The largest fraction of smart systems is currently made up of expert systems, and this is expected to be overtaken by autonomous robots through technology transformation by 2024. There are critics as well as supporters of this trend. As the number of robotic appliances continues to increase, the cost of sensors will decrease; this is the simple reason artificial intelligence robots will multiply in no time.

The robotic sensors market is estimated to grow at a CAGR of nearly 8% over the forecast period (2021-2026). Currently, most industries, like automotive, transportation, industrial manufacturing, logistics, and defense, have started to adopt autonomous robotics and digital technology as their main mode of production.

As a result of this rapid growth of smart technologies, interconnected artificial intelligence can lead to uncertainty. Though computers can take intelligent action, they are not capable of replicating the cognitive processes of the human brain. Artificial intelligence algorithms can only deal with known data and cannot predict or formulate rational decisions in uncertain situations.

New technologies using the most advanced AI and machine learning have been emerging thanks to widespread connectivity and inexpensive sensors. These technologies are primitive and still not capable of mimicking the human brain.

It becomes clear that AI algorithms relate facts to solutions through experiential learning, without any acknowledgment of physics. AI has evolved from a scientific advancement to an engineering tool. The latest innovations in digital tech require engineers from various domains to learn and integrate AI tools into their engineering designs.

Many open-source tools, such as Microsoft's DMTK, Google's TensorFlow, and Amazon's DSSTNE, provide software libraries that empower machine learning. Google's open-source DeepVariant software can reconstruct a person's genome from sequencing data more accurately than other methods, and engineers are turning to tools like it for help.

Personal productivity assistants like Amazon's Alexa, Apple's Siri, and Microsoft's Cortana use natural language processing to make decisions. IBM Watson has been trained by oncologists to help them treat and diagnose lung cancer. Tesla is getting closer to self-driving autonomous vehicles. Zebra Medical Systems, an Israeli company, is developing radiology tools with greater-than-human accuracy. All this is possible with the help of the different types of engineers who are responsible for training smart systems.

At this point in time, the role of a human engineer may soon become that of a director rather than a producer or manufacturer of products. Though humans may not be executing the task, they are definitely the ones choosing the direction in which the machine should work. Once machines know how to design things, the system of engineering will change, but engineers will still be highly skilled and relevant.

The uncertain future of technologies demands resilient and versatile engineers who can design robust technologies using artificial intelligence with a range of skill sets, including teaching AI systems how to innovate and becoming part of future human-AI organizations.


Go here to read the rest:
Artificial Intelligence and the Future of Engineering - Analytics Insight


Fusion of Artificial Intelligence and Nanopore Technology: Passing the COVID Test in Just Five Minutes – SciTechDaily

Operating principle of the artificial intelligence nanopore for coronavirus detection. Credit: Osaka University

Researchers at Osaka University develop a new highly sensitive test for the SARS-CoV-2 virus that utilizes a fusion of artificial intelligence and nanopore technology which may enable rapid point-of-care testing for COVID.

A team of scientists headed by SANKEN (The Institute of Scientific and Industrial Research) at Osaka University demonstrated that single virus particles passing through a nanopore could be accurately identified using machine learning. The test platform they created was so sensitive that the coronaviruses responsible for the common cold, SARS, MERS, and COVID could be distinguished from each other. This work may lead to rapid, portable, and accurate screening tests for COVID and other viral diseases.

The global coronavirus pandemic has revealed the crucial need for rapid pathogen screening. However, the current gold standard for detecting RNA viruses, including SARS-CoV-2, the virus that causes COVID, is reverse transcription-polymerase chain reaction (RT-PCR) testing. While accurate, this method is relatively slow, which hinders the timely interventions required to control an outbreak.

Now, scientists led by Osaka University have developed an intelligent nanopore system that can be used for the detection of SARS-CoV-2 virus particles. Using machine-learning methods, the platform can accurately discriminate between similarly sized coronaviruses responsible for different respiratory diseases. "Our innovative technology has high sensitivity and can even electrically identify single virus particles," first author Professor Masateru Taniguchi says. Using this platform, the researchers were able to achieve a sensitivity of 90% and a specificity of 96% for SARS-CoV-2 detection in just five minutes using clinical saliva samples.

To fabricate the device, nanopores just 300 nanometers in diameter were bored into a silicon nitride membrane. When a virus was pulled through a nanopore by the electrophoretic force, the opening became partially blocked. This temporarily decreased the ionic flow inside the nanopore, which was detected as a change in the electrical current. The current as a function of time provided information on the volume, structure, and surface charge of the target being analyzed. However, to interpret the subtle signals, which could be as small as a few nanoamps, machine learning was needed. The team used 40 PCR-positive and 40 PCR-negative saliva samples to train the algorithm.
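To make the shape of this approach concrete, here is a minimal, illustrative sketch of the classification step: summary features are extracted from each current trace and fed to an off-the-shelf classifier. The signals here are synthetic stand-ins; the actual study used its own pipeline and clinical data.

# Illustrative sketch only: classifying nanopore current traces.
# Synthetic blockade signals stand in for real measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_trace(is_positive):
    # Baseline current with a brief dip whose depth and duration
    # differ slightly between the two (synthetic) classes.
    trace = rng.normal(1.0, 0.02, 1000)      # baseline, arbitrary units
    depth = 0.30 if is_positive else 0.22    # blockade depth
    width = 120 if is_positive else 90       # blockade duration (samples)
    start = rng.integers(200, 600)
    trace[start:start + width] -= depth
    return trace

def features(trace):
    # Simple summaries of the blockade: depth, duration, area.
    dip = np.median(trace) - trace
    blocked = dip > 0.1
    return [dip.max(), blocked.sum(), dip[blocked].sum()]

X = [features(make_trace(i % 2 == 0)) for i in range(80)]  # 40 + 40 samples
y = [i % 2 == 0 for i in range(80)]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # accuracy on synthetic data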

"We expect that this research will enable rapid point-of-care and screening tests for SARS-CoV-2 without the need for RNA extraction," Professor Masateru Taniguchi explains. "A user-friendly and non-invasive method such as this is more amenable to immediate diagnosis in hospitals and screening in places where large crowds are gathered." The complete test platform consists of machine learning software on a server, a portable high-precision current measuring instrument, and cost-effective semiconducting nanopore modules. By using a machine-learning method, the researchers expect that this system can be adapted for the detection of emerging infectious diseases in the future. The team hopes that this approach will revolutionize public health and disease control.

Reference: "Combining machine learning and nanopore construction creates an artificial intelligence nanopore for coronavirus detection" by Masateru Taniguchi, Shohei Minami, Chikako Ono, Rina Hamajima, Ayumi Morimura, Shigeto Hamaguchi, Yukihiro Akeda, Yuta Kanai, Takeshi Kobayashi, Wataru Kamitani, Yutaka Terada, Koichiro Suzuki, Nobuaki Hatori, Yoshiaki Yamagishi, Nobuei Washizu, Hiroyasu Takei, Osamu Sakamoto, Norihiko Naono, Kenji Tatematsu, Takashi Washio, Yoshiharu Matsuura and Kazunori Tomono, 17 June 2021, Nature Communications. DOI: 10.1038/s41467-021-24001-2

Go here to see the original:
Fusion of Artificial Intelligence and Nanopore Technology: Passing the COVID Test in Just Five Minutes - SciTechDaily


Missing figures of Rembrandt's 'Night Watch' restored by artificial intelligence - WION

Rembrandt's famed "The Night Watch" is back on display for the first time in 300 years,with missing parts temporarily restored in an exhibition aided by artificial intelligence.

Rembrandt finished the large canvas, which portrays the captain of an Amsterdam city militia ordering his men into action, in 1642.

Although it is now considered one of the greatest masterpieces of the Dutch Golden Age, strips were cut from all four sides of it during a move in 1715.


Though those strips have not been found, another artist of the time had made a copy, and restorers and computer scientists have used that, blended with Rembrandt's style, to recreate the missing parts.

"It's never the real thing, but I think it gives you different insight into the composition," Rijksmuseum director Taco Dibbits said.

The effect is a little like seeing a photo cropped as the photographer would have wanted.

The central figure in the painting, Captain Frans Bannink Cocq, now appears more off-centre, as he was in Rembrandt's original version, making the work more dynamic.

Some of the figure of a drummer entering the frame on the far right has been restored, as he marches onto the scene, prompting a dog to bark.

Three restored figures that had been missing on the left, not highly detailed, are onlookers, not members of the militia. That was an effect Rembrandt intended, Dibbits said, to draw the viewer into the painting.

Rijksmuseum Senior Scientist Robert Erdmann explained some of the steps in crafting the missing parts, which are hung to overlap the original work without touching it.

The museum always knew the original, uncut, painting was bigger, in part thanks to a far smaller copy painted at the same time that is attributed to Gerrit Lundens.

Researchers and restorers who have painstakingly pored over the work for nearly two years using a battery of high tech scanners, X-rays and digital photography combined the vast amount of data they generated with the Lundens copy to recreate and print the missing strips.

"We made an incredibly detailed photo of the Night Watch, and through artificial intelligence, or what they call a neural network, we taught the computer what color Rembrandt used in the Night Watch, which colors, and what his brush strokes looked like," Dibbits said.

The machine learning also enabled the museum to remove distortions in perspective that are present in the Lundens copy, because the artist was sitting at one corner while he copied Rembrandt's painting.
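The standard computer-vision tool for that kind of perspective correction is a homography warp. A minimal sketch with OpenCV follows; the corner coordinates and file names are made up, and this illustrates the general technique rather than the museum's actual pipeline.

# Sketch of perspective correction via a homography (OpenCV).
# File names and corner coordinates are hypothetical.
import cv2
import numpy as np

img = cv2.imread("lundens_copy.jpg")  # hypothetical input image

# Four corresponding points: where a rectangle appears in the distorted
# copy (src) and where it should land after correction (dst).
src = np.float32([[120, 80], [1850, 40], [1900, 1400], [90, 1450]])
dst = np.float32([[0, 0], [1920, 0], [1920, 1440], [0, 1440]])

H = cv2.getPerspectiveTransform(src, dst)          # 3x3 homography matrix
corrected = cv2.warpPerspective(img, H, (1920, 1440))
cv2.imwrite("lundens_corrected.jpg", corrected)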

The reason the 1642 group portrait of an Amsterdam civil militia was trimmed is simple: It was moved from the militia's clubhouse to the town hall, and there it didn't fit on a wall between two doors. A bit of very analog cropping with a pair of scissors ensued, and the painting took on the dimensions that have now been known for centuries. The fate of the pieces of canvas that were trimmed off remains a mystery.

Under relaxations of the Dutch COVID-19 lockdown, the museum can welcome more visitors from this weekend, but still only about half of its normal capacity.

During the restoration project, the painting was encased in a specially designed glass room and studied in unprecedented detail from canvas to the final layer of varnish.

Among that mound of data, researchers created the most detailed photograph ever made of the painting by combining 528 digital exposures.

The 1642 painting last underwent significant restoration more than 40 years ago after it was slashed by a knife-wielding man and is starting to show blanching in parts of the canvas.

Dibbits said the new printed additions are not intended to trick visitors into thinking the painting is bigger, but to give them a clear idea of what it was supposed to look like.

"Rembrandt would have definitely done it more beautifully, but this comes very close," he said.

(With inputs from agencies)

See original here:
Missing figures of Rembrandt's 'Night Watch' restored by artificial intelligence - WION


Pentagon leaders eye new policy on responsible artificial intelligence (AI) that is reliable and governable – Intelligent Aerospace

WASHINGTON - The Pentagon's Joint Artificial Intelligence Center (JAIC) will lead implementation of responsible artificial intelligence (AI) across the U.S. Department of Defense (DOD), according to a new directive, Mila Jasper reports for Defense One. Continue reading the original article.

The Intelligent Aerospace take:

June 21, 2021 -U.S. Deputy Defense Secretary Kathleen Hicks has enumerated foundational tenets for responsible AI, reaffirmed the ethical AI principles the department adopted last year, and mandated the JAIC director start work on four activities for developing a responsible AI ecosystem. The five ethical AI principles are responsible, equitable, traceable, reliable, and governable.

The guidance requires disciplined governance, warfighter trust, a systems-engineering and risk-management approach to AI acquisitions, incorporation of responsible AI principles in requirements, a robust national and global responsible AI ecosystem, and a workforce educated on responsible AI.

According to a January 2021 JAIC blog post, the center has already begun work around responsible AI. The JAIC already launched a DOD-wide Responsible AI Subcommittee in March 2020, the post notes, and this diverse group of approximately 50 individuals representing all major components of the DOD has been meeting monthly throughout the year on a variety of efforts associated with policy and governance processes.

Related: Artificial intelligence and machine learning for unmanned vehicles

Related: Pentagon to spend $874 million on artificial intelligence (AI) and machine learning technologies next year

Related: Researchers ask industry for military technologies in artificial intelligence (AI) and unmanned aircraft

Jamie Whitney, Associate Editor, Intelligent Aerospace

Go here to see the original:
Pentagon leaders eye new policy on responsible artificial intelligence (AI) that is reliable and governable - Intelligent Aerospace


Artificial Intelligence is Revolutionizing This Contractor’s Construction Scheduling and Risk Management – ForConstructionPros.com

Digital engineering and construction manager Project Controls Cubed is applying artificial intelligence directly to the construction process at the mega-projects the contractor is serving, slashing scheduling labor and managing schedule risk through InEight's Schedule software.

Project Controls Cubed offers 4D modeling, virtual and augmented reality services, digital-twin technology, virtual planning, scheduling and cost control. The company is contracted for planning, scheduling and cost control on four billion-dollar-plus West Coast water projects: the EchoWater program, Pure Water program and Sites program in California, and the Bull Run Treatment program in Oregon.

"We understand the power that is behind any schedule, and that's why we have deployed it (InEight Schedule). We have faith that it is the best solution out there to manage planning, scheduling and cost control, and its predictability certainly has paid off," says Jeff Campbell, director of planning and scheduling at Project Controls Cubed. "The secret sauce to the program has been AI, which InEight named Iris. Iris works in the background and starts to predict what our project contributors are going to do with the schedule, and the outcome is outstanding. The key role in what we do is provide situational awareness to our clients, and that situational awareness provides actionable intelligence, which allows the leaders of these big programs to make decisions with confidence."

"So there's this idea of what we call a Knowledge Library that gets populated when you archive or as-build a schedule, a cost estimate or anything. And it's friendly enough where you can use P6 XER files, Microsoft Project files, Excel files -- it's what we call unstructured data," says Nate St. John, head of product, scheduling and risk at InEight. "So leveraging what we call an inference engine, it will mine this unstructured data and identify matches for recommendations when a planner begins to preplan a new project. The machine-learning element we've built into the software continuously suggests and learns, suggests and learns, and it's a cyclical process. If Jeff or someone on his team declines a suggested match, the computer will remember that, learn it, and change the algorithm so that the next time Jeff and his team encounter a scenario with similar attributes, there'll be a more accurate suggestion from the inference engine. And it all falls under this AI definition."
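A rough analogue of that match-suggestion idea can be sketched with plain text similarity: vectorize archived activity descriptions with TF-IDF and rank them against a new activity. The data below is invented, and InEight's inference engine is proprietary; this shows only the pattern.

# Toy analogue of "suggest a match from past work": TF-IDF similarity
# over archived activity descriptions. Data is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

archive = [
    "install filtration facility membrane units",
    "pour concrete foundation for pump station",
    "commission disinfection system and controls",
]

vectorizer = TfidfVectorizer()
archive_vecs = vectorizer.fit_transform(archive)

new_activity = "construct new filtration facility"
scores = cosine_similarity(vectorizer.transform([new_activity]), archive_vecs)[0]

# Suggest the closest archived activity; a planner would accept or decline,
# and that feedback would adjust future suggestions.
best = scores.argmax()
print(archive[best], round(float(scores[best]), 2))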

One big-picture benefit of software that can learn how you think is the way it preserves your experience. Campbell mentions a huge, advanced wastewater treatment program called EchoWater that PC3 has been serving since 2012.

"We've created and constructed some incredible facilities there. And the people that we work with are probably the most brilliant I've ever seen -- not just the superintendent, but the program manager and all of his senior people, the project and program managers. EchoWater (construction) is coming to an end in 2022-2023, and what happens to all of those incredibly intelligent people? Well, they probably are going to retire -- a lot of them already have."

"What we've done is captured all of the information (generated over the course of construction) into the Iris AI. So the next time we build an advanced wastewater treatment program, we can use that knowledge library to create schedule modules that have how much it actually costs to build each component and the actual time -- all the actual start and finish dates of all the activities that can go into it."

"On a program that big we're talking 100,000 activities," Campbell says. "Even with 20 years of experience, I wouldn't know what to do with 100,000 activities. Well, Iris does. Iris can keep those as schedule modules. So when we propose a different program, we're not going to create a whole new schedule. We just go into Iris and say, 'Hey, we're creating a new filtration facility.' And Iris will tell us, 'Well, you've already made a filtration facility before. Here's what it actually took in time and money to create that.' And all of a sudden, we have a schedule that would have taken me years to develop. So when we go into proposing a new program, we bring that knowledge to it."

Artificial intelligence also integrates project managers' ongoing knowledge of job progress with monthly reviews in InEight Schedule to automate risk scheduling. What Campbell calls "non-scheduling people," such as program managers and project managers, get an email listing activities for which they are responsible that were active in the just-finished period. The managers update the status of those activities in InEight.

Jeff describes it as a check on the pulse of project progress: "They're saying, 'How am I doing with the activities I'm in charge of, according to my deliverables?' So they take it really seriously."

"InEight Schedule allows them to update the schedule and, at the same time, introduce risks. Those risks are kind of like an off-ramp to say, 'Guess what, team, there may be an issue with us making this deliverable, which could impact us getting funding. But it's not necessarily that I am the cause of that. I am introducing a risk that may be affecting that.'"

"In the past, that would go to a risk manager -- someone who's like a 40-hour-a-week professional who uses some really expensive software to develop a risk-adjusted schedule that's actually not a schedule. It's just kind of a Monte Carlo simulation. We don't need to have a six-figure-income professional risk person on our programs, because Iris and InEight Schedule create risk-adjusted schedules automatically."
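For readers unfamiliar with the term, a Monte Carlo schedule simulation samples uncertain activity durations many times and reads percentile finish dates off the results. Here is a minimal sketch with invented durations and a single invented risk:

# Minimal Monte Carlo schedule-risk sketch. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # simulation runs

# (optimistic, most likely, pessimistic) durations in days, serial chain
activities = [(8, 10, 15), (18, 20, 30), (4, 5, 9)]

# A discrete risk: 30% chance of a 10-day delay.
risk_prob, risk_impact = 0.30, 10

totals = np.zeros(N)
for lo, mode, hi in activities:
    totals += rng.triangular(lo, mode, hi, N)
totals += (rng.random(N) < risk_prob) * risk_impact

print("P50 finish:", round(np.percentile(totals, 50), 1), "days")
print("P80 finish:", round(np.percentile(totals, 80), 1), "days")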

At Schedule Review, project managers introduce risks to their deliverables and rate their confidence in performance to feed the AI's ability to automatically generate risk-adjusted schedules and budgets. (Image: InEight Software)

Project managers introduce risks at project reviews and assign them probabilities, the chance of their occurring as a percentage. Campbell's scheduling team reviews their updates, assesses the risks, and decides whether or not to have Iris create an InEight risk-adjusted schedule. If one is warranted, program and project managers immediately see the risk-adjusted schedule.

"Right away, they're starting the mitigation planning, which would have taken weeks or months to do on the other side. So that is another brilliant feature that has really helped us in our programs."

InEight recently released functionality that allows the cost estimate and schedule to use a unified risk register. The user benefits from the work construction planners do up front in developing a risk matrix: it can be imported into InEight Schedule, which now offers a unified point of access to the impact of a risk occurring, measured not only in days but also in dollars.

Iris also has a feature called Schedule Critique that measures schedule quality.

"We're talking about things like constraints, missing logic, missing predecessors, successors -- all sorts of great things that we need to make sure are included when we produce the schedule. If you do not have a high-quality schedule, then what you're showing your stakeholders is not very good information," says Campbell. "InEight Schedule allows us to be better planners and schedulers because it will automatically identify, as you're building the schedule: 'Hey, you're missing predecessors. Hey, you have a constraint. Hey, there's a gap in your critical path.' It very nicely and intelligently tells you when you're not being a very good planner/scheduler and tells you how to fix it. InEight Schedule keeps us to our high standards."

Experience using it has convinced Campbell that AI is a lynchpin in the chain of events that will take construction performance up to 21st Century standards of business reliability and productivity.

"You have to think about the project superintendent and program manager. They have all the experience, but eventually those great folks are going to retire. Iris and InEight Schedule capture all of their knowledge, and that is how we are going to evolve as an industry."

"Right now, every single time we have a new project, we start over from scratch, right? But now, the more we learn from past programs and projects, the more we build that Knowledge Library and can apply that knowledge to our new programs and projects, the better we're going to be. We're going to have actual cost data, actual start and actual finish dates. So we will know exactly what it took to build something instead of what it might take to build something in the future."

Visit link:
Artificial Intelligence is Revolutionizing This Contractor's Construction Scheduling and Risk Management - ForConstructionPros.com


Spotlight on AI: Latest Developments in the Field of Artificial Intelligence – Analytics Insight

What's new in the world of artificial intelligence?

Artificial intelligence is changing the course of our lives with its constant developments. Before the pandemic and now in the new normal, AI remains a key trend in the tech industry. It is reaching wider audiences as the years pass, and the scientists, engineers, and entrepreneurs who work with modern technologies are reaping the benefits of AI and related fields such as IoT and machine learning.

Organizations that overlooked digital transformation and the power of artificial intelligence are picking up the pace of AI adoption. When COVID-19 was creating chaos across industries, it became evident that disruptive technologies, and the automation that comes with them, are more than crucial.

While 2020 was a great year for artificial intelligence to work at its true potential, here are the latest advancements in the field of AI that promise exciting times for the future of this technology.

Researchers from the University of Gothenburg have developed an artificial intelligence model to predict which viruses can spread from animals to humans. The algorithm studies the role of carbohydrates to understand the infection path. In scientific terms, carbohydrates are called glycans, and they play a significant role in the way our bodies function. Almost all viruses, including the coronavirus, first interact with the glycans in our bodies. Led by Daniel Bojar, assistant professor at the University of Gothenburg, the team's new AI model can analyze glycans with improved accuracy. The model analyzes the infection process by predicting new virus-to-glycan interactions to better understand zoonotic diseases.

The world is evolving with disruptive technologies, and that includes hackers and cyber attackers. Cyberattacks have become more common amidst the remote working culture, where sensitive files and documents have become prime targets. To deal with this pressing concern, V.S. Subrahmanian, a cybersecurity researcher at Dartmouth College, created an algorithm called the Word Embedding-based Fake Online Repository Generation Engine (WE-FORGE) that generates fake versions of patents that are under development. This makes it difficult for hackers to find what they are looking for. The system generates convincing fakes based on the keywords of a given document: for each keyword it identifies, it analyzes a list of related topics and replaces the original keyword with a randomly chosen related word.
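The keyword-replacement step can be sketched in a few lines. The real WE-FORGE derives related terms from word embeddings; this toy version substitutes a hand-made table of related terms and invented text:

# Toy sketch of keyword replacement for decoy documents. The related-terms
# table and sample text are invented; WE-FORGE uses word embeddings.
import random

related = {
    "graphene": ["fullerene", "nanotube", "boron nitride"],
    "anneal": ["sinter", "cure", "quench"],
}

def make_decoy(text, rng):
    words = []
    for word in text.split():
        key = word.lower().strip(".,")
        # Replace each known keyword with a randomly chosen related term.
        words.append(rng.choice(related[key]) if key in related else word)
    return " ".join(words)

original = "We anneal the graphene layer at low temperature."
print(make_decoy(original, random.Random(0)))  # one of many plausible fakes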

DataRobot announced its second major platform launch, DataRobot version 7.1, with new MLOps management agents, time series model enhancements, and automated AI reports. With an aim to provide lifecycle management for remote AI and machine learning models, DataRobot's new release offers feature discovery push-down integration for Snowflake and time series Eureqa model improvements. Through this, Snowflake users get automatic discovery and computation of individual independent variables in the Snowflake data cloud. Apart from these additions, DataRobot also provides a no-code app builder that can convert deployed models into AI apps without coding.

Exscientia's US$60M acquisition of Allcyte will boost AI drug discovery. Allcyte is an Austrian company developing an artificial intelligence platform to study how cancer treatments work on different individuals. After the acquisition, this technology will work with Exscientia's native software, which uses AI to identify potential drug targets, build the right drugs, and send them to trials. Exscientia will now be able to take a precision medicine approach to designing drug molecules, ensuring improved efficiency.

Redi2 Technologies, creator of the SaaS delivery model for financial services billing solutions, has announced a collaboration with IBM Private Cloud Services to improve flexibility. The combination of these technologies will add strong value for Redi2 Revenue Manager clients. Top asset managers throughout the world can take advantage of improvements like fast reaction options for clients who need quick responses to changes, the ability to move data from one country to another, and expanded infrastructure capacity.


The rest is here:
Spotlight on AI: Latest Developments in the Field of Artificial Intelligence - Analytics Insight


Fake news generated by artificial intelligence can be convincing enough to trick even experts – Scroll.in

If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation, flagged and unflagged, has been aimed at the general public. Imagine the possibility of misinformation -- information that is false or misleading -- in scientific and technical fields like cybersecurity, public safety and medicine.

There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it is possible for artificial intelligence systems to generate false information in critical fields like medicine and defence that is convincing enough to fool experts.

General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.

To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and Covid-19 medical studies and presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.

Much of the technology used to identify and manage misinformation is powered by artificial intelligence. AI allows computer scientists to fact-check large amounts of misinformation quickly, given that there is too much for people to detect without the help of technology. Although AI helps people detect misinformation, it has ironically also been used to produce misinformation in recent years.

Transformers, like BERT from Google and GPT from OpenAI, use natural language processing to understand text and produce translations, summaries and interpretations. They have been used in such tasks as storytelling and answering questions, pushing the boundaries of machines displaying human-like capabilities in generating text.

Transformers have aided Google and other technology companies by improving their search engines, and they have helped the general public with such common problems as writer's block.

Transformers can also be used for malevolent purposes. Social networks like Facebook and Twitter have already faced the challenges of AI-generated fake news across platforms.

Our research shows that transformers also pose a misinformation threat in medicine and cybersecurity. To illustrate how serious this is, we fine-tuned the GPT-2 transformer model on open online sources discussing cybersecurity vulnerabilities and attack information. A cybersecurity vulnerability is the weakness of a computer system, and a cybersecurity attack is an act that exploits a vulnerability. For example, if a vulnerability is a weak Facebook password, an attack exploiting it would be a hacker figuring out your password and breaking into your account.

We then seeded the model with the sentence or phrase of an actual cyberthreat intelligence sample and had it generate the rest of the threat description. We presented this generated description to cyberthreat hunters, who sift through lots of information about cybersecurity threats. These professionals read the threat descriptions to identify potential attacks and adjust the defences of their systems.
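That seed-and-continue step looks roughly like the following with the Hugging Face transformers library. The fine-tuned checkpoint path is hypothetical (the stock "gpt2" weights merely demonstrate the API), the seed sentence is invented, and this is a sketch of the method rather than the researchers' exact code.

# Sketch: seed a (fine-tuned) GPT-2 with a threat-report opening and let
# it generate the rest. "./gpt2-cti" is a hypothetical fine-tuned model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # or "./gpt2-cti"

seed = "Attackers have been observed targeting the aviation sector with"
inputs = tokenizer(seed, return_tensors="pt")

# Sampling (rather than greedy decoding) yields varied, natural-looking
# continuations -- part of what makes such text hard to spot.
outputs = model.generate(
    **inputs,
    max_length=80,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))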

We were surprised by the results. The cybersecurity misinformation examples we generated were able to fool cyberthreat hunters, who are knowledgeable about all kinds of cybersecurity attacks and vulnerabilities. Imagine this scenario with a crucial piece of cyberthreat intelligence that involves the airline industry, which we generated in our study.

This misleading piece of information contains incorrect information concerning cyberattacks on airlines with sensitive real-time flight data. This false information could keep cyber analysts from addressing legitimate vulnerabilities in their systems by shifting their attention to fake software bugs. If a cyber analyst acts on the fake information in a real-world scenario, the airline in question could have faced a serious attack that exploits a real, unaddressed vulnerability.

A similar transformer-based model can generate information in the medical domain and potentially fool medical experts. During the Covid-19 pandemic, preprints of research papers that have not yet undergone a rigorous review are constantly being uploaded to such sites as medRxiv.

They are not only being described in the press but are being used to make public health decisions. Consider the following, which is not real but generated by our model after minimal fine-tuning of the default GPT-2 on some Covid-19-related papers.

The model was able to generate complete sentences and form an abstract allegedly describing the side effects of Covid-19 vaccinations and the experiments that were conducted. This is troubling both for medical researchers, who consistently rely on accurate information to make informed decisions, and for members of the general public, who often rely on public news to learn about critical health information. If accepted as accurate, this kind of misinformation could put lives at risk by misdirecting the efforts of scientists conducting biomedical research.

Although examples like these from our study can be fact-checked, transformer-generated misinformation hinders such industries as health care and cybersecurity in adopting AI to help with information overload. For example, automated systems are being developed to extract data from cyberthreat intelligence that is then used to inform and train automated systems to recognise possible attacks. If these automated systems process such false cybersecurity text, they will be less effective at detecting true threats.

We believe the result could be an arms race as people spreading misinformation develop better ways to create false information in response to effective ways to recognise it.

Cybersecurity researchers continuously study ways to detect misinformation in different domains. Understanding how to automatically generate misinformation helps in understanding how to recognise it. For example, automatically generated information often has subtle grammatical mistakes that systems can be trained to detect. Systems can also cross-correlate information from multiple sources and identify claims lacking substantial support from other sources.
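One concrete detection signal along these lines is to score a passage's predictability under a language model: machine-generated text often has unusually low perplexity. The sketch below computes perplexity with GPT-2; it is a heuristic to combine with other signals, not a reliable detector on its own.

# Heuristic sketch: perplexity of a passage under GPT-2. Unusually low
# values can be one flag (among many) of machine-generated text.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == inputs, the model returns the mean negative
        # log-likelihood of the sequence; exponentiate for perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("The vulnerability allows remote attackers to execute code."))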

Ultimately, everyone should be more vigilant about what information is trustworthy and be aware that hackers exploit people's credulity, especially if the information is not from reputable news sources or published scientific work.

Priyanka Ranade is a PhD Student in Computer Science and Electrical Engineering and Anupam Joshi is a Professor of Computer Science & Electrical Engineering at the University of Maryland, Baltimore County.

Tim Finin is a Professor of Computer Science and Electrical Engineering at the same institute.

This article first appeared on The Conversation.

More here:
Fake news generated by artificial intelligence can be convincing enough to trick even experts - Scroll.in
