Category Archives: Machine Learning

Star Trek Creator’s Foundation Offers $1M Prize for AI Promoting Good – AI Business

The foundation honoring the late creator of Star Trek has announced a $1 million prize for startups using AI to shape a brighter future.

Gene Roddenberry's vision of humanity, as seen in Star Trek, was one of peaceful coexistence, with material wants set aside in pursuit of scientific curiosity and self-betterment.

This year's Roddenberry Prize contest focuses on AI and machine learning in the hopes of unearthing solutions that align with the vision of the show's creator.

Judges are looking for AI and machine learning technologies with real-world impact capable of scaling to support billions of people.

Proposed solutions must respect individual rights, be designed to avoid biases, and support at least one of the United Nations' 17 Sustainable Development Goals.

The foundation, established by Roddenberry's family following his death in 1991, said the competition affirms Roddenberry's confidence in humanity's wisdom and creativity to build a better future.

"As AI becomes more powerful and ubiquitous, we call for its use in service of a more equitable and prosperous world in which all of us, regardless of our background, can thrive," according to the foundation.

The contest will feature three rounds: an exploratory first round, followed by a deeper dive with a select group in the second round. The final round will include hour-long meetings with five startups in October and November.

Related: Future AI Could Share Knowledge Like the Borg in Star Trek

The application deadline is July 12.

The competition is open to global startups that have raised seed funding but have not exceeded Series A. Nonprofit entries must have an annual budget of less than $5 million.

"As we enter the AI era, we find ourselves at a critical juncture in human history, poised on the brink of profound technological transformation," according to the foundation. "The rapid advancement of AI promises to revolutionize virtually every aspect of society, from the way we work and communicate to how we navigate the complexities of the modern world."

AI was deeply interwoven into the lore of Roddenberry's Star Trek universe, from the interactive computer systems found on starships to the Emergency Medical Holograms capable of treating patients on Voyager.

One AI-related story still fascinates scholars to this day: The Next Generation episode "The Measure of a Man," in which a legal hearing is called to determine whether the android crewmember Data was sentient or merely a machine.

Captain Kirk himself was turned into an AI chatbot back in 2021, when actor William Shatner was immortalized by StoryFile, a company he part-owns. However, StoryFile filed for Chapter 11 bankruptcy protection earlier this year.

Related: NASA Scientist Evokes Star Trek Diversity to Enable Interplanetary Travel

Read the original post:
Star Trek Creator's Foundation Offers $1M Prize for AI Promoting Good - AI Business

Domino Data Lab Named a Visionary in the 2024 Gartner Magic Quadrant for Data Science and Machine Learning … – PR Newswire

SAN FRANCISCO, June 18, 2024 /PRNewswire/ -- Domino Data Lab, provider of the leading Enterprise AI platform trusted by the largest AI-driven companies, has been named a Visionary in the 2024 Gartner Magic Quadrant for Data Science and Machine Learning Platforms [1]. The Magic Quadrant report evaluated 18 vendors based on their Completeness of Vision and Ability to Execute, with Domino positioned as one of three vendors in the Visionaries Quadrant. "Visionaries understand where the market is going or have a vision for changing market rules." [2]

For Domino, being named a Visionary affirms the company's strong position in a rapidly evolving market, its continued innovation, business and ecosystem growth, and strong market traction with enterprises. Customers use Domino to solve the most complex life sciences, financial services, public sector, and insurance challenges.

"To us, the Gartner recognition of Domino as a Visionary validates our commitment to helping the world's most advanced enterprises accelerate the impact of AI." said NickElprin, CEO of Domino Data Lab. "Amidst a new era of AI techniques and a dynamic landscape of security and regulatory requirements, Domino remains the definitive platform for enterprises where AI plays a mission-critical role."

Domino offers unmatched support for enterprises that require robust AI security and governance. Its unique platform flexibility also makes Domino the platform of choice for enterprise-wide AI development and deployment across various environments using a wide array of tools. Domino's enhanced Generative AI and Responsible AI capabilities expand its appeal and empower more enterprises to adopt transformative AI solutions.

Domino's recent platform enhancements include leading-edge innovations such as:

Gartner clients can access the report here: https://www.gartner.com/interactive/mq/5509595?ref=solrResearch&refval=416864281.

Additional Resources

Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and MAGIC QUADRANT is a registered trademark of Gartner, Inc. and/or its affiliates; both are used herein with permission. All rights reserved.

About Domino Data Lab

Domino Data Lab empowers the largest AI-driven enterprises to build and operate AI at scale. Domino's Enterprise AI platform unifies the flexibility AI teams want with the visibility and control the enterprise requires. Domino enables a repeatable and agile ML lifecycle for faster, responsible AI impact with lower costs. With Domino, global enterprises can develop better medicines, grow more productive crops, develop more competitive products, and more. Founded in 2013, Domino is backed by Sequoia Capital, Coatue Management, NVIDIA, Snowflake, and other leading investors. Learn more at http://www.domino.ai.

[1] Gartner, Magic Quadrant for Data Science and Machine Learning Platforms, Afraz Jaffri, Aura Popa, Peter Krensky, Jim Hare, Raghvender Bhati, Maryam Hassanlou, Tong Zhang, 17 June 2024.

[2] Gartner, Research Methodologies, "Magic Quadrant", https://www.gartner.com/en/research/methodologies/magic-quadrants-research

SOURCE Domino Data Lab

Visit link:
Domino Data Lab Named a Visionary in the 2024 Gartner Magic Quadrant for Data Science and Machine Learning ... - PR Newswire

Stellantis’ VP of AI, Algorithms, and Machine Learning Has Resigned – Mopar Insiders

Berta Rodriguez-Hervas, Stellantis Vice President of Artificial Intelligence, Algorithms, and Machine Learning Operations, has resigned from the automaker. She is the latest in a series of high-level executives to leave Stellantis. A company spokesperson confirmed her departure and stated that Rodriguez-Hervas decided to pursue a new opportunity.

According to her LinkedIn profile, Rodriguez-Hervas joined Stellantis in January 2022 from computer chip maker Nvidia. In a video released by the company last year, Rodriguez-Hervas described Stellantis as "the right mix between tradition and innovation."

She has also worked for Tesla Inc. on its Autopilot driver-assist technology. She was a doctoral researcher with Mercedes-Benz on its safety research team, where she focused on machine learning and radar systems.

"Stellantis maintains high talent density, and we remain committed to talent development and succession planning throughout the year to ensure continuity," the spokesperson said in a statement to Automotive News. "Our strong team is well-equipped to continue the excellent work achieved so far."

According to the spokesperson, Stellantis anticipates that its next-generation technology platforms, including STLA AutoDrive, STLA Brain, and STLA SmartCockpit, will be ready by the end of 2024. Rodriguez-Hervas detailed the AutoDrive system on June 12 during a software demo at the automaker's proving grounds in Chelsea, Michigan.

Stellantis claims AutoDrive leverages the capabilities of STLA Brain and STLA SmartCockpit to deliver useful and continuously updated Advanced Driver Assistance System technology that is intuitive and robust and that inspires driver confidence.

Source: Automotive News

See the rest here:
Stellantis' VP of AI, Algorithms, and Machine Learning Has Resigned - Mopar Insiders

Upcoming Opportunities in Video Annotation Service for Machine Learning Market: Future Trend and Analysis of Key … – openPR


The report provides a professional, in-depth examination of the Video Annotation Service for Machine Learning market's current scenario; CAGR, gross margin, revenue, price, production growth rate, volume, value, market share, and growth are among the market data assessed and re-validated in the research. The report will also cover key agreements, collaborations, and global partnerships soon to change the dynamics of the market on a global scale. Detailed company profiling enables users to evaluate company share analysis, emerging product lines, the scope of new product development in new markets, pricing strategies, innovation possibilities, and much more.

Get a Sample Copy of This Report at: https://www.worldwidemarketreports.com/sample/1018592

The purpose of this market analysis is to estimate the size and growth potential of the market based on the kind of product, the application, the industry analysis, and the area. Also included is a comprehensive competitive analysis of the major competitors in the market, including their company profiles, critical insights about their product and business offerings, recent developments, and important market strategies.

The Leading Players involved in the global Video Annotation Service for Machine Learning market are:

iMerit, HabileData, Keymakr, Sama, Mindy Support, Damco Group, Anolytics.ai, TaskUs, AI Wakforce, Infosearch BPO, Infosys, Cogito, DIGI-TEXX, Smartone, Maxicus, SunTec.AI, Kotwel, GTS, Pixta AI

Video Annotation Service for Machine Learning Market Segments:

According to the report, the Video Annotation Service for Machine Learning Market is segmented in the following ways which fulfill the market data needs of multiple stakeholders across the industry value chain -

Segmentation by Type:

2D Video Annotation Service
3D Video Annotation Service

Segmentation by Applications:

Autonomous Vehicles
Healthcare and Medical Imaging
Retail
Sports and Entertainment
Agriculture
Manufacturing and Industrial Automation
Others

Trends and Opportunities of the Global Video Annotation Service for Machine Learning Market:

The global Video Annotation Service for Machine Learning market has seen several trends in recent years, and understanding these trends is crucial to staying ahead of the competition. The market also presents several opportunities: the increasing demand for video annotation services across various industries opens up considerable room for growth for players in the market.

Regional Outlook:

The following section of the report offers valuable insights into different regions and the key players operating within each of them. To assess the growth of a specific region or country, economic, social, environmental, technological, and political factors have been carefully considered. The section also provides readers with revenue and sales data for each region and country, gathered through comprehensive research. This information is intended to assist readers in determining the potential value of an investment in a particular region.

North America: USA, Canada, Mexico, etc.
Asia-Pacific: China, Japan, Korea, India, and Southeast Asia
The Middle East and Africa: Saudi Arabia, the UAE, Egypt, Turkey, Nigeria, and South Africa
Europe: Germany, France, the UK, Russia, and Italy
South America: Brazil, Argentina, Colombia, etc.

Research Methodology:

Research Objectives: This section provides an overview of the research study's primary objectives, encompassing the research questions and hypotheses that will be addressed.
Research Design: This section presents the comprehensive outline of the research design, encompassing the selected approach for the study (quantitative, qualitative, or mixed-methods), the methodologies utilized for data collection (surveys, interviews, focus groups), and the sampling strategy employed (random sampling, stratified sampling).
Data Collection: This section involves gathering information from primary and secondary sources. Primary sources included the use of survey questionnaires and interview guides, while secondary sources encompassed existing data from reputable publications and databases. Data collection procedures involved meticulous steps such as data cleaning, coding, and entry to ensure the accuracy and reliability of the collected data.
Data Analysis: The data were analyzed using various methods including statistical tests, qualitative coding, and content analysis.
Limitations: The study's limitations encompass potential biases, errors in data sources, and overall data constraints.

Highlights of the Report:

For the period 2024-2031, accurate market size and compound annual growth rate (CAGR) predictions are provided.
Exploration and in-depth evaluation of growth potential in major segments and geographical areas.
Company profiles of the top players in the global Video Annotation Service for Machine Learning market are provided in detail.
Comprehensive investigation of innovation and other market developments in the global Video Annotation Service for Machine Learning market.
Reliable industry value chain and supply chain analysis.
A thorough examination of the most significant growth drivers, limitations, obstacles, and future prospects.

Following are Some of the Most Important Questions that are Answered in this Report:

What are the most important market laws governing major sections of the Video Annotation Service for Machine Learning market?
Which technological advancements are having the greatest influence on the anticipated growth of the worldwide market?
Which businesses currently control the majority of the global market?
What kinds of primary business models do the leading companies in the market typically implement?
What are the most important elements that will have an impact on the expansion of the market around the world?
How do the main companies in the global market integrate important strategies?
What are the present revenue contributions of the various product categories in the worldwide market, and what changes are expected to occur?

Reason to Buy

Save and reduce time carrying out entry-level research by identifying the growth, size, leading players, and segments in the global Video Annotation Service for Machine Learning market.
Highlights key business priorities to guide companies in reforming their business strategies and establishing themselves across a wide geography.
The key findings and recommendations highlight crucial progressive industry trends, allowing players to develop effective long-term strategies and garner market revenue.
Develop or modify business expansion plans by using substantial growth offerings in developed and emerging markets.
Scrutinize in-depth global market trends and outlook, coupled with the factors driving the market as well as those restraining its growth to a certain extent.
Enhance the decision-making process by understanding the strategies that underpin commercial interest with respect to products, segmentation, and industry verticals.

Buy this report and Get Up to % Discount At: https://www.worldwidemarketreports.com/promobuy/1018592

Stay ahead of the curve and drive your business forward with confidence. The Future of Industries report is your indispensable resource for navigating the ever-evolving business landscape, fueling growth, and outperforming your competition. Don't miss this opportunity to unlock the strategic insights that will shape your company's future success.

Author Bio:

Money Singh is a seasoned content writer with over four years of experience in the market research sector. Her expertise spans various industries, including food and beverages, biotechnology, chemical and materials, defense and aerospace, consumer goods, etc. (https://www.linkedin.com/in/money-singh-590844163)

Contact Us:

Mr. Shah
Worldwide Market Reports
Tel: U.S. +1-415-871-0703 | U.K. +44-203-289-4040 | Japan +81-50-5539-1737
Email: sales@worldwidemarketreports.com
Website: https://www.worldwidemarketreports.com/

About WMR:

Worldwide Market Reports is your one-stop repository of detailed and in-depth market research reports compiled by an extensive list of publishers from across the globe. We offer reports across virtually all domains and an exhaustive list of sub-domains under the sun. The in-depth market analysis by some of the most vastly experienced analysts provides our diverse range of clients from across all industries with vital decision-making insights to plan and align their market strategies in line with current market trends.

This release was published on openPR.

Read more:
Upcoming Opportunities in Video Annotation Service for Machine Learning Market: Future Trend and Analysis of Key ... - openPR

AI-Driven Automation is Transforming Manufacturing and Overcoming Key Challenges in the Industry – Quality Magazine

In the ever-evolving landscape of manufacturing and automation, the quest for efficiency, quality, and flexibility remains paramount. However, achieving these goals has become increasingly complex due to a myriad of challenges faced by modern manufacturing facilities. Fortunately, advancements in artificial intelligence (AI) and machine learning technologies offer a beacon of hope, promising to revolutionize industrial automation and address these challenges head-on.

Manufacturers today grapple with the pressing need to predict manufacturing performance with unparalleled precision. Rising operating costs, including energy and software license expenses, coupled with the escalating costs of quality errors such as product recalls, underscore the urgency for solutions that optimize process efficiency. This imperative for efficiency gains drives the heightened interest in AI and machine learning technologies.

Generative AI and machine learning tools are particularly appealing as they offer insights into the underlying relationships within manufacturing processes. By demystifying these relationships, algorithms empower teams to unlock previously underutilized assets and enhance overall operational efficiency. Ultimately, the central question guiding manufacturing endeavors is: How can we do more with less?

While AI adoption in manufacturing is still in its nascent stages, pioneering facilities have begun integrating AI into their operations. These early adopters, equipped with robust data infrastructure and a culture of continuous improvement, leverage AI for anomaly detection and predictive maintenance. By analyzing real-time data streams, AI algorithms can detect deviations from the ideal state and enact proactive measures to maintain process integrity.

Data from stable processes can be used to confidently address the limitations of a production line. This benefit can manifest as efficiency improvements, such as predictive maintenance rather than reactive repairs. It can also increase quality by uncovering the relationships between raw material batches from specific upstream vendors and desired production metrics, and increase flexibility by empowering automation to both read and write data for production lot sizes of one. Verifying that tasks adhere to pre-planned work instructions ensures that the entire data record for a lot is complete before a product leaves a specific work cell. This flexibility can further manifest itself by challenging the sequential dependencies of specific tasks, allowing each lot of one to be completed in the most efficient manner, which maximizes output regardless of the product mix and allows facilities to consistently meet production quotas.

However, widespread AI deployment in industrial automation faces hurdles, including the lack of standardized data aggregation frameworks and the absence of scalable deployment networks. Bridging these gaps is essential to unlock AI's full potential in manufacturing.

When outlining the deployment of AI, whether the AI is generative and trained in an unsupervised manner or traditional and developed through data mining, it can be helpful to organize the machine learning system into three sections.

The first section is all about the data. A data-first architecture enables the data to be aggregated holistically and with substantial granularity, preserving the context in which the data was generated, all without compromising the performance of the automation on the factory floor. The second section is the algorithm itself: whether hosted on the edge or in the cloud, this is the actual problem-solving operation. The third section is the neural network that can deploy the remediation in real time, based on the prediction from the data aggregation and the algorithm.

Of course, with the huge leaps forward we have seen in large language models in the consumer space, all the attention is on the second section. The algorithm is often the catalyst for an AI conversation about a potential machine learning pilot program.

Major challenges still reside in the first and third sections. Without an automation architecture that can aggregate data with a high degree of resolution and transport it securely in the format the algorithm requires, a valuable algorithm cannot be built through data mining or reinforcement learning. Without a neural network to deploy a remediation, or an avenue to collaborate with the tribal knowledge on the factory floor, the process cannot benefit from the great leaps forward in algorithm development. Currently, we are seeing gaps in the first and third sections that need to be addressed before algorithm development can start.

When addressing these challenges, it begins with a mindset of unifying the automation on the factory floor. A good way to start down that path is to put data first. By looking at data holistically, teams can identify silos within their automation, then work toward a single connection and a single control unit. However, being data-first does not mean being blind to the costs of short-sighted data aggregation. Technologies that are incompatible with the current automation architecture, require additional software licenses, compromise machine performance, or introduce additional cyber vulnerabilities should all be scrutinized.

To address these challenges and ensure successful integration of AI technologies into their automation systems, teams have looked to globally open industrial protocols. EtherNet/IP, EtherCAT, and IO-Link can all be leveraged to reduce complexity on the factory floor while aligning with the protocols already used in native automation systems. When integrating or even updating automation, teams should start with one section of the plant floor at a time. Upgrading one section at a time minimizes the risk to overall production by reducing the vulnerability to plant-wide downtime through proper production planning. Starting small also creates an increased reservoir of spare parts for consumption elsewhere in the plant, and it extends the transition period, allowing more time to train maintenance and production teams.

Looking ahead, the future of AI-driven automation holds immense promise for manufacturers. AI technologies will continue to evolve, enabling algorithms to discern intricate relationships within manufacturing processes and optimize resource allocation. As AI algorithms become more specialized and adept at identifying analogies and patterns, manufacturers can expect unparalleled efficiency gains and competitive advantages.

In conclusion, AI and machine learning technologies represent a paradigm shift in industrial automation, offering manufacturers unprecedented opportunities to enhance efficiency, quality, and flexibility. By embracing AI-driven automation solutions and overcoming integration challenges, manufacturers can unlock the full potential of AI to propel their operations into the future.

Visit link:
AI-Driven Automation is Transforming Manufacturing and Overcoming Key Challenges in the Industry - Quality Magazine

Beginner's Guide to Machine Learning Testing With DeepChecks – KDnuggets

DeepChecks is a Python package that provides a wide variety of built-in checks to test for issues with model performance, data distribution, data integrity, and more.

In this tutorial, we will learn about DeepChecks and use it to validate the dataset and test the trained machine learning model to generate a comprehensive report. We will also learn to test models on specific tests instead of generating full reports.

Machine learning testing is essential for ensuring the reliability, fairness, and security of AI models. It helps verify model performance, detect biases, enhance security against adversarial attacks, especially in Large Language Models (LLMs), ensure regulatory compliance, and enable continuous improvement. Tools like Deepchecks provide a comprehensive testing solution that addresses all aspects of AI and ML validation, from research to production, making them invaluable for developing robust, trustworthy AI systems.

In this getting started guide, we will load the dataset and perform a data integrity test. This critical step ensures that our dataset is reliable and accurate, paving the way for successful model training.
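The tutorial's own code is not reproduced in this excerpt; below is a minimal sketch of this step, assuming the deepchecks.tabular API and a hypothetical CSV file and label column:

```python
import pandas as pd
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import data_integrity

# Hypothetical file and label column, for illustration only
df = pd.read_csv("loan_data.csv")
ds = Dataset(df, label="loan_status", cat_features=[])

# Run the full built-in data integrity suite on the raw dataset
integrity_result = data_integrity().run(ds)
integrity_result.show()  # renders the interactive report in a notebook
# integrity_result.save_as_html("data_integrity.html")  # or save to a file
```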

It will take a few seconds to generate the report.

The data integrity report contains test results on:

Let's train our model and then run a model evaluation suite to learn more about model performance.
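A sketch of that step under the same assumptions as above; a RandomForest stands in for whatever ensemble the tutorial actually used:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from deepchecks.tabular.suites import model_evaluation

train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)
train_ds = Dataset(train_df, label="loan_status", cat_features=[])
test_ds = Dataset(test_df, label="loan_status", cat_features=[])

# Fit any scikit-learn-compatible model on the training features
model = RandomForestClassifier(random_state=42)
model.fit(train_ds.data[train_ds.features], train_ds.data[train_ds.label_name])

# Run the full model evaluation suite against the train and test splits
eval_result = model_evaluation().run(train_ds, test_ds, model)
eval_result.show()
```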

The model evaluation report contains the test results on:

There are other tests available in the suite that didn't run due to the ensemble type of model. If you ran a simple model like logistic regression, you might have gotten a full report.

If you don't want to run the entire suite of model evaluation tests, you can also test your model on a single check.

For example, you can check label drift by providing the training and testing dataset.
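A sketch, assuming a recent deepchecks release (in older versions this check is named TrainTestLabelDrift):

```python
from deepchecks.tabular.checks import LabelDrift

# Compare the label distributions of the training and test datasets
drift_result = LabelDrift().run(train_dataset=train_ds, test_dataset=test_ds)
drift_result.show()
```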

As a result, you will get a distribution plot and drift score.

You can even extract the value and methodology of the drift score.
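For example, via the check result's value attribute (the exact key names may vary between versions):

```python
# The raw result behind the plot: the drift score plus the method used
print(drift_result.value)  # e.g. {'Drift score': 0.02, 'Method': "Cramer's V"}
```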

The next step in your learning journey is to automate the machine learning testing process and track performance. You can do that with GitHub Actions by following the Deepchecks In CI/CD guide.

In this beginner-friendly guide, we learned to generate data validation and machine learning evaluation reports using DeepChecks. If you have trouble running the code, I suggest you have a look at the Machine Learning Testing With DeepChecks Kaggle Notebook and run it yourself.

Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in technology management and a bachelor's degree in telecommunication engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.

Read more here:
Beginner's Guide to Machine Learning Testing With DeepChecks - KDnuggets

Beyond AI: Building toward artificial consciousness Part I – CIO

As the race to deploy artificial intelligence (AI) hits a fever pitch across enterprises, the savviest organizations are already looking at how to achieve artificial consciousness, a pinnacle of technological and theoretical exploration. However, this undertaking requires unprecedented hardware and software capabilities, and while systems are under construction, the enterprise has a long way to go to understand the demands, and even longer before it can deploy them. This piece is the first in a series of three articles outlining the parameters for artificial consciousness.

The hardware requirements include massive amounts of compute, control, and storage. These enterprise IT categories are not new, but the performance requirements are unprecedented. While enterprises have experience deploying compute, control, and storage requirements for Software-as-a-Service (SaaS)-based applications in a mobile-first and cloud-first world, they are learning how to scale these hardware requirements for AI environments and, ultimately, systems that can deliver artificial consciousness nirvana.

It all starts with compute capacity

As Lenovo's third annual global CIO report revealed, CIOs are developing their AI roadmaps now and assessing everything from their organizational support to capacity building to future-forward investment in tech. The first requirement CIOs must meet when considering artificial consciousness is compute capacity, which falls under capacity building. The amount of compute needed is much more intensive than for AI or even GenAI, given the sheer volume of data required to enable systems that are fully capable of learning and reasoning.

The higher processing power is achieved by leveraging a compute fabric comprised of sophisticated server clusters. This approach is familiar to CIOs who have deployed high-performance computing (HPC) infrastructure. These clusters seamlessly integrate advanced hardware to deliver unparalleled processing power and efficiency.

At the heart of this cluster-based infrastructure configuration is the concept of a pod, meticulously organized to maximize computing density and thermal efficiency. Each pod comprises 16 racks, with each rack housing eight water-cooled servers, a configuration that ensures not only optimal performance but also environmental sustainability through advanced cooling capabilities. These high-powered servers feature 2TB of DDR5 registered DIMM ECC system memory to ensure rapid access to data, and they combine direct water cooling with a rear-door heat exchanger that captures residual waste heat. These state-of-the-art servers are customizable with the latest GPUs or AI processors available from Nvidia, AMD, or Intel, providing massive parallel computing power for this extremely demanding application.
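For a sense of scale, taking the figures above at face value: 16 racks of eight servers works out to 128 servers per pod, and at 2TB per server, roughly 256TB of DDR5 system memory per pod.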

Each 16-rack pod also includes a Vertiv end-of-row coolant distribution unit, an innovative component designed to efficiently manage the thermal dynamics of high-density computing environments and ensure this high-powered hardware operates within safe thermal thresholds. The result is a system that delivers high performance and reliability while also significantly boosting energy efficiency. By reducing the overall cooling power requirements, each pod is both powerful and environmentally conscious.

Laying the foundation for artificial consciousness

The quest to build artificial consciousness is ambitious, as maximizing the groundbreaking algorithms introduces a whole new set of hardware infrastructure requirements, the first of which is compute power. Once an enterprise scales its processing power, it must also scale its control and storage hardware before it can activate the advanced software stacks and strategic services that will operationalize artificial consciousness. The next article in the series will look at how to build capacity for higher control and storage hardware requirements.

Learn how Lenovo unlocks the power of AI for enterprises.

See original here:
Beyond AI: Building toward artificial consciousness Part I - CIO

Wondershare Filmora 13.5 Unveils Upgraded AI Toolkit for Creators – AiThority

Wondershare has announced the latest update to its industry-leading video editing software, Filmora 13.5. This version introduces upgraded features designed to augment editors' creativity and improve efficiency while delivering high-quality professional content.

Filmora 13.5 enhances its text capabilities by introducing a new Curved Text feature, providing users unparalleled control over titles, captions, and subtitles. This update also expands Filmora's AI-powered toolset, complementing its existing content generation features with two notable additions: a new AI Sticker Generator and Voice Cloning for the AI Text-To-Speech feature. These improvements further cement Filmora's position as a versatile video editing platform, catering to the evolving needs of today's creators.

AiThority.com Latest News: WEKA Grows IP Portfolio to Over 100 Patents

Filmora 13.5 enhances its AI Text-To-Speech tool, now featuring advanced voice replication technology. This innovative feature supports 16 languages, breaking the language barrier with a comprehensive range of linguistic options. Within 30 seconds, users can instantly clone and generate a similar voice that replicates speaking speed, intonation, and accent.

The standout AI Sticker Generator expands Filmora's extensive asset generation capabilities, giving users even more creative choices. Users can input text prompts, select a style, and generate unique stickers that can be applied directly to the timeline or exported independently. This feature expands Filmora's asset pool, meeting niche demands and offering creators a comprehensive range of top-tier resources.

Filmora 13.5 also introduces a Curved Text feature, opening new possibilities for creating eye-catching visual effects. This tool is perfect for social media videos, educational content, creative projects, and advertising production. It grants editors unparalleled control over editing text that captures viewers' attention and improves engagement.

AiThority.com Latest News: Banuba Revolutionizes Video Editing with AI Clipping SDK for Mobile

These new features cater to a diverse audience, including content creators, freelancers, marketers, influencers, small business owners, and beginners eager to learn video editing. By simplifying complex editing tasks and providing innovative tools, Filmora 13.5 enables users to produce high-quality, professional-looking content more efficiently. Filmora 13.5 continues Wondershare's commitment to making cutting-edge technology accessible to everyone, integrating innovative AI functions with an intuitive user interface to empower creators to bring their visions to life.


Read more:
Wondershare Filmora 13.5 Unveils Upgraded AI Toolkit for Creators - AiThority

Multimodal AI: Turning a One-Trick Pony into Jack of All Trades – InformationWeek

Just when you think artificial intelligence could not do more to reduce mundane workloads, create content from scratch, sort through massive amounts of data to derive insights, or identify anomalies on an X-ray, along comes multimodal AI.

Until very recently, AI was mostly focused on understanding and processing singular text- or image-based information, a one-trick pony, so to speak. Today, however, there's a new entrant into the world of AI, a true jack of all trades in the form of multimodal AI. This new class of AI involves the integration of multiple modalities -- such as images, videos, audio, and text -- and is able to process multiple data inputs.

What multimodal AI really delivers is context. Since it can recognize patterns and connections between different types of data inputs, the output is richer and more intuitive, getting closer to multi-faceted human intelligence than ever before.

Just as generative AI (GenAI) has done over the past year, multimodal AI promises to revolutionize almost all industries and bring a whole new level of insights and automation to human-machine interactions.

Already, many Big Tech players are vying to dominate multimodal AI. One of the most recent players is X (formerly Twitter), which launched Grok 1.5, which it claims outperforms its competitors when it comes to real-world spatial understanding. Other players include Apple MM1, Anthropic Claude 3, Google Gemini, Meta ImageBind, and OpenAI GPT-4.

Related: Help Your C-Suite Colleagues Navigate Generative AI

While AI comes in many forms -- from machine learning and deep learning to predictive analytics and computer vision -- the real showstopper for multimodal AI is computer vision. With multimodal AI, computer vision's capabilities go far beyond simple object identification. With the ability to combine many types of data, the AI solution can understand the context of an image and make more accurate decisions. For example, the image of a cat, combined with audio of a cat meowing, gives it greater accuracy when identifying all images of cats. In another example, an image of a face, when combined with video, can help AI not only identify specific people in photos but also gain greater contextual awareness.

Use cases for multimodal AI are just beginning to surface, and as it evolves it will be used in ways not even imaginable today. Consider some of the ways it is or could be applied:

Ecommerce. Multimodal AI could analyze text, images and video in social media data to tailor offerings to specific people or segments of people.

Automotive. Multimodal AI can improve the capabilities and safety of self-driving cars by combining data from multiple sensors, such as cameras, radar or GPS systems, for heightened accuracy.

Healthcare. It can use data from images and scans, electronic health records and genetic testing results to assist clinicians in making more accurate diagnoses, as well as more personalized treatment plans.

Finance. It can enable heightened risk assessment by analyzing data in various formats to get deeper insights and understanding of specific individuals and their risk level for mortgages, etc.

Conservation. Multimodal AI could identify whales from satellite imagery, as well as audio of whale sounds to track migration patterns and changing feeding areas.

Related: The AI Skills Gap and How to Address It

Multimodal AI is an exciting development, but it still has a long way to go. A fundamental challenge lies in integrating information from disparate sources cohesively. This involves developing algorithms and models capable of extracting meaningful insights from each modality and integrating them to generate comprehensive interpretations.

Another challenge is the scarcity of clean, labeled multimodal datasets for training AI models. Unlike single-modality datasets, which are more plentiful, multimodal datasets require annotations that capture correlations between different modalities, making their creation more labor-intensive and resource-intensive. Yet achieving the right balance between modalities is crucial for ensuring the accuracy and reliability of multimodal AI systems.

Related: AI, Data Centers, and Energy Use: The Path to Sustainability

As with other forms of AI, ensuring unbiased multimodal AI is a key consideration made more difficult because of the varied types of data. Regardless, diverse types of images, text, video, and audio need to be factored into the development of solutions, as well as the biases that can arise from the developers themselves.

Data privacy and protection also need to be considered, given the vast amount of personal data that multimodal AI systems may process. Questions could arise about data ownership, consent, and protection against misuse, when humans are not fully in control of the output of AI.

Addressing these ethical challenges requires a collaborative effort involving developers, government, industry leaders, and individuals. Transparency, accountability, and fairness must be prioritized throughout the development lifecycle of multimodal AI systems to mitigate their risks and foster trust among users.

Multimodal AI is bringing the capabilities of AI to new heights, enabling richer and deeper insights than previously possible. Yet, no matter how smart AI becomes, it can never replace the human mind and its many facets of knowledge, intuition, experience and reasoning -- AI still has a long way to go to achieve that, but it's a start.

Read more:
Multimodal AI: Turning a One-Trick Pony into Jack of All Trades - InformationWeek

Machine learning and hydrodynamic proxies for enhanced rapid tsunami vulnerability assessment | Communications … – Nature.com

Synthetic variables for shielding mechanism and debris impact as proxies for water velocity

To comprehensively analyze the individual contributions of the three approaches for accounting for water velocity, we systematically trained different eXtra Trees (XT) models [33], each featuring a unique combination of input variables. The reference scenario (ID0) serves as both the initial benchmark and foundational baseline, encompassing the minimum set of variables retained across all subsequent scenarios. This baseline incorporates only basic input variables sourced from the original MLIT database, further enriched with some of the geospatial variables introduced by Di Bacco et al. [23] characterized by the most straightforward computation. Subsequently, the additional models are generated by iteratively introducing velocity-related (directly or indirectly) features into the model. This stepwise approach allows us to isolate the incremental improvements in predictive accuracy attributed to each individual component under consideration. Table 1 in Methods offers a concise overview of all tested variables, with those included in the reference scenario highlighted in italics.

The core results of the analysis aimed at assessing the predictive performance variability among the various trained models are summarized in Fig. 1, which illustrates the global average accuracy (expressed in terms of hit rate (HR) on the test set) achieved by each model across ten training sessions. In the figure, each column represents a specific combination of input features, with x markers indicating excluded variables during each model training. Insights into the importance of individual input features on the models' predictive performance are provided by the circles, the size of which corresponds to the mean decrease in accuracy (mda) when each single variable is randomly shuffled.

Circle size reflects the mean decrease in accuracy (mda) when individual variables are shuffled and x markers indicate excluded variables in model training.
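The paper's code is not included in this excerpt; the following is a minimal, self-contained sketch of the described evaluation loop (hit rate plus permutation-based mean decrease in accuracy) using scikit-learn, with synthetic data standing in for the MLIT-derived features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the MLIT-derived feature matrix (7 damage classes)
X, y = make_classification(n_samples=2000, n_features=8, n_informative=6,
                           n_classes=7, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = ExtraTreesClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Hit rate (HR): overall accuracy on the held-out test set
print("HR:", model.score(X_test, y_test))

# Mean decrease in accuracy (mda): drop in HR when one feature is shuffled
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, mda in enumerate(imp.importances_mean):
    print(f"feature {i}: mda = {mda:.3f}")
```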

The pair plot in Fig. 2, illustrating the correlations and distributions among the considered velocity-related variables as well as Distance across the seven damage classes in the MLIT dataset, has been generated to support the interpretation of the results and enrich the discussion. This graphical representation employs scatter plots to display the relationships between each pair of variables, while the diagonal axis shows kernel density plots for the individual features.

The pie chart summarizes the distribution of the various damage states within the dataset (shades from light pink to violet). The pair plot displays the relationships between each pair of variables, while the diagonal axis represents kernel density plots for the individual features.

The baseline model (ID0), established as a reference due to its exclusion of any velocity information, attains an average accuracy of 0.836. In ID1, the model exclusively incorporates the direct contribution of vsim, resulting in a modest improvement, with accuracy reaching 0.848. The subsequent model, ID2, closely resembling ID1 but replacing vsim with vc, demonstrates a decline in performance, with an accuracy value of 0.828. This decrease is attributed to the redundancy between vc and inundation depth (h), both in their shared importance as variables and in the decrease of h's importance compared to the previous case. Essentially, when both variables are included, the model might become confused because h, which could have been a relevant variable when introduced alone, may now appear less important due to the addition of vc, which basically provides the same information in a different format.

The analysis proceeds with the introduction of buffer-related proxies to account for possible dynamic water effects on damage. Initially, we isolate the effect of the two considered mechanisms: the shielding (ID3) exerted by structures within the buffers (NShArea and NSW) and the debris impact (NDIArea, ID4). In both instances, we observe an enhancement in accuracy, with values reaching 0.877 and 0.865, respectively. Their combined effect is considered in model ID5, yielding only a marginal overall performance improvement (0.878), due to the noticeable correlation between NShArea and NDIArea, especially for the more severe damage levels (Fig. 2), with the two variables sharing their overall importance. Combination ID6, with the addition of vc, does not exhibit an increase in accuracy compared to the previous model (0.871), thus confirming the redundant contribution of a variable directly derived from another.

In the subsequent three input feature combinations, we explore the possible improvements in accuracy through the inclusion of vsim in conjunction with the considered proxies. In the case of ID7, where vsim is combined solely with the shielding effect, no enhancement is observed (0.870) compared to the corresponding simple ID3. Similarly, when replacing shielding with the debris proxy (ID8), an overall accuracy of 0.867 is achieved, closely resembling the performance of ID4, lacking direct velocity input. The highest accuracy (0.889) is instead obtained when all three contributions are included simultaneously. Hence, the inclusion of vsim appears to result only in a marginal enhancement of model performance, with also an overall lower importance compared to the considered two proxies. From a physical perspective, albeit without a noticeable correlation between the data points of vsim and NShArea (Fig. 2), this result can be explained by recognizing that flow velocity indirectly encapsulates the shielding effect arising from the presence of buildings, which are typically represented in hydrodynamic models as obstructions to wave propagation or through an increase in bottom friction for urban areas [8,34,35,36]. Since this alteration induced by the presence of buildings directly influences the hydrodynamic characteristics of the tsunami on land, the resulting values of vsim offer limited additional improvement to the model's predictive ability compared to what is already provided by h and NShArea. Moreover, the very weak correlation of the considered proxies with the primary response variable h (Fig. 2) reinforces their importance in the framework of a machine learning approach, since they provide distinct input information compared to flow velocity, which, instead, is directly related to h, as discussed for vc. Such observations then support the idea of regarding these proxies as suitable variables for capturing dynamic water effects on buildings.

In all previous combinations, observed field values (hMLIT) served as the primary data source for inundation depth information. However, for a more comprehensive analysis, we also introduced feature combination ID10, similar to ID9 but employing simulated inundation depths (hsim) in place of hMLIT. This model achieves accuracy levels comparable to its counterparts and exhibits a consistent feature importance pattern, albeit with a slight increase in the importance of the Distance variable.

For completeness, normalized confusion matrices, describing hit and misclassification rates among the different damage classes, are reported in Supplementary Fig. S1. These matrices reveal uniform error patterns across all models, with Class 5 consistently exhibiting higher misclassification rates, as a result of its underrepresentation in the dataset, as illustrated in Fig. 2. Concerning the potential influence of such dataset imbalance on the results, it is worth noting that, for the primary aim of this study, it does not alter the overall outcomes in terms of relative importance of the various features on damage predictions, as it affects all trained models in the same way.

Delving further into the analysis of the results, the objective shifts toward gaining a thorough understanding of the relationships between the variables influencing the damage mechanisms. Indeed, while we have shown that the inclusion of water velocity components or the adoption of a more comprehensive multi-variable approach enhances tsunami damage predictions, machine learning algorithms have often been criticized for their inherent black-box nature [30,31,32].

To address this challenge, we have chosen to embrace the concept of explanation through visualization by illustrating how it remains possible to derive explicit and informative insights from the outcomes derived from a machine learning approach, all while embracing the inherent complexity arising from the multi-variable nature of the problem at hand.

The results of the trained models are then translated into the form of traditional fragility functions, expressing the probability of exceeding a certain damage state as a function of inundation depth, for fixed values of the feature under investigation, distinguished for velocity-related (Fig. 3), site-dependent (Fig. 4) and structural building attributes (Fig. 5). In addition to the central value, the derived functions incorporate the 10th-90th percentile confidence intervals to provide a comprehensive representation of the predictive uncertainty associated with them.

Fragility functions for fixed values of (a) direct velocity information (vsim), (b) the proxy for the shielding effect (NShArea) and (c) the proxy for debris impact (NDIArea). The median fragility function is represented as a solid line, while the shaded area represents the 10th-90th percentile confidence interval.

Fragility functions for fixed values of (a) coastal typology (CoastType) and (b) distance from the coastline (Distance). The median fragility function is represented as a solid line, while the shaded area represents the 10th-90th percentile confidence interval.

Fragility functions for fixed values of (a) structural type (BS) and (b) number of floors (NF). The median fragility function is represented as a solid line, while the shaded area represents the 10th-90th percentile confidence interval.
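While the paper presents these functions graphically, the underlying computation can be sketched by sweeping inundation depth through a trained multi-class model while holding the remaining features fixed; continuing the synthetic setup from the previous snippet (the depth column index and damage-state threshold below are illustrative):

```python
def exceedance_curve(model, depths, template_row, ds_threshold, depth_col=0):
    """P(DS >= ds_threshold) as a function of inundation depth, holding all
    other input features at the values in template_row."""
    rows = np.tile(template_row, (len(depths), 1))
    rows[:, depth_col] = depths                    # sweep inundation depth
    proba = model.predict_proba(rows)              # shape (n_depths, n_classes)
    return proba[:, model.classes_ >= ds_threshold].sum(axis=1)

depths = np.linspace(0.1, 8.0, 50)
p_exceed = exceedance_curve(model, depths, X_test[0].copy(), ds_threshold=4)
```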

Starting with the analysis of the fragility functions obtained for fixed values of velocity-related variables (Fig. 3), it is possible to observe the substantial impact of the hydrodynamic effects, especially in more severe inundation scenarios. Notably, differences in the median fragility functions for the more damaging states (DS≥5) are only evident when velocity reaches high values (around 10 m/s), while those for 0.1 and 2 m/s are practically overlapping, albeit featuring a wide uncertainty band, demonstrating how the several additional explicative variables included in the model affect the damage process. More pronounced differences in the fragilities become apparent for lower damage states, under shallower water depths (h < 2 m) and slower flow velocities, although a substantial portion of the predictive power in non-structural damage scenarios predominantly relies on the inundation depth [8,11,13]. The velocity proxy accounting for the shielding effect (NShArea) mirrors the behavior observed for vsim, but with greater variability for DS7.

For instance, the probability of reaching DS7 with an inundation depth of 4 m drops from ~70% for an isolated building (NShArea = 0) to roughly 40% for one located in a densely populated area (NShArea = 0.5). This substantial variation not only highlights the influence of this variable for describing the damage mechanism, but also explains its profound impact on the models' predictive performance shown in Fig. 1. Conversely, for less severe DS, the central values of the three considered fragility functions tend to converge onto a single line, indicating that the shielding mechanism primarily influences the process leading to the total destruction of buildings. Distinct patterns emerge for the velocity proxy related to debris impact (NDIArea), particularly for DS≥5, emphasizing its crucial role in predicting relevant structural damages.

For example, at an inundation depth of 4 m, the probability of reaching DS7 is ~40% when NDIArea = 0 (i.e., no washed-away structures in the buffer area for the considered building), but it rises to ~90% when NDIArea = 0.3 (i.e., 30% of the buffer area with washed-away buildings). Moreover, similarly to NShArea, the width of the uncertainty band generally narrows with decreasing damage state, thus suggesting that inundation depth acts as the main predictor for low-entity damages. These results represent an advancement beyond the work of Reese et al. [26], who first attempted to incorporate information on shielding and debris mechanisms into fragility functions based on a limited number of field observations for the 2009 South Pacific tsunami, and Charvet et al. [8], who investigated the possible effect of debris impacts (through the use of a binary variable) on damage levels for the 2011 Great East Japan event.

Concerning morphological variables, Fig. 4 well represents the amplification effect induced by ria-type coasts, especially for the higher damage states, consistently with prior literature [8,11,13,37,38]. However, above 6 m, the median fragility curve for the plain coastal areas exceeds that of the ria-type region, in line with findings by Suppasri et al. [37,38], who also described a similar trend pattern. Nevertheless, it is worth observing that the variability introduced by other contributing features muddles the differences between the two coastal types, with the magnitude of the uncertainty band almost eclipsing the noticeable distinctions in the central values. This observation highlights the imperative need to move beyond the use of traditional univariate fragility functions, in favor of multi-variable models, intrinsically capable of taking these complex interactions into account. Distance from the coast has emerged as a pivotal factor in predictive accuracy (Fig. 1) and this is also evident in the corresponding fragility functions computed for Distance values of 170, 950 and 2600 m (Fig. 4). Obviously, a clear negative correlation exists between Distance and inundation depth (Fig. 2), with structures closer to the coast being more susceptible to damage, especially in the case of structural damages. In detail, more pronounced differences in the fragility patterns are observed for DS5 and DS6, where the probability of exceeding these damage states with a 2 m depth is almost null for buildings located around 1 km from the coast, while it increases to over 80% for those in close proximity to the coastline. This mirrors the observations resulting for NDIArea (Fig. 3), where greater distances result in less damage potential from washed-away buildings.

Figure 5 illustrates the fragility functions categorized by structural type (BS) and building characteristics represented in terms of NF. Overall, the observed patterns align with the findings discussed in the preceding figures. When focusing on the median curves, it becomes evident that these features exert minimal influence on the occurrence of non-structural damages, with overlapping curves and relatively narrow uncertainty bands for DS<5, owing to the mentioned dominance of inundation depth as the main damage-predictive variable in such cases.

However, for the more severe damage states, distinctions become more marked. Reinforced-concrete (RC) buildings exhibit lower vulnerability, followed by steel, masonry and wood structures, with the latter two showing only minor differences between them. A similar trend is also evident for NF, with taller buildings being less vulnerable than shorter ones under severe damage scenarios. The most relevant differences emerge when transitioning from single- or two-story buildings to multi-story dwellings. However, once again, it is worth noting that, beyond these general patterns, also highlighted in previous studies [1,5,8,11,26,34,37], the influence of other factors tends to blur the distinctions among the central values of the different typologies, as visible, for instance, in the confidence interval for steel buildings, which encompasses both median fragility functions for wood and masonry structures.

More here:
Machine learning and hydrodynamic proxies for enhanced rapid tsunami vulnerability assessment | Communications ... - Nature.com