Perspective: Can artificial intelligence teach us to be better workers? – Yahoo News

So-called soft skills are in short supply in the workforce and society at large. Could artificial intelligence help us get better? | Adobe.com

A few weeks ago, McKinsey & Company published updated estimates on when key anticipated capabilities of artificial intelligence might arrive, including creativity, logical reasoning, and social and emotional reasoning, sensing and output. McKinsey's timeline for increased capacity across a range of such capabilities has moved sharply forward.

The reason for this seismic shift? Generative AI, the technology taking the world by storm in the form of chatbots like ChatGPT and Claude and image generators like DALL-E.

McKinsey now estimates that AI will reach top-quartile human ability in creativity, natural language generation and understanding, and social and emotional reasoning and sensing, an astonishing 20 to 25 years earlier than its previous estimates in 2017.

By the mid-2030s, McKinsey predicts, AI will likely be more proficient than three-quarters of the human workforce in these so-called soft skills and many others, including coordination with multiple agents, logical reasoning and problem solving, output articulation and presentation, generating novel patterns and categories, sensory perception, and social and emotional output.

So how might such developments affect actual jobs?

McKinsey says that AI's natural language abilities are increasing the potential to automate decision-making, collaboration and the application of expertise in the workforce. In other words, AI is gearing up to transform (and, in some cases, fully automate) the jobs of knowledge workers, professionals and creatives, jobs and skills that previously looked to be out of AI's reach for decades to come.

And, bear in mind, additional acceleration based on improved large language models, better hardware and other efficiency improvements may, and likely will, continue to pull these timeframes forward.

It's important to remember, however, that just because jobs change doesn't mean they go away. We have a labor shortage driven by economic growth, a higher ratio of retirees to active workers, and declining immigration. These longer-term trends suggest we will need technology, including AI, to increase the capacity of humans in the workforce. If so, our biggest problem in the future may not be too much automation, but too little.

Nevertheless, McKinsey's estimates on social-emotional capacities are fodder for those already inclined to despair of the future. If machines beat us at reading, understanding and imitating these uniquely human characteristics, what's left for us to do? Won't this bring us one step closer to a world where unique human qualities are eclipsed by, or confused with, machine characteristics? However unlikely such outcomes are, there's no sense in pretending they are impossible.

Still, it is equally possible that better machine social-emotional capacities are precisely what we need at this moment in human development.

As I've written elsewhere, social-emotional skills, or soft skills, are the biggest and most important deficit in the workforce and, I would argue, in society at large. They both form our capacity to learn and are crucial to success and advancement on the job and in life more generally. Soft-skill shortages help feed social conflict and immiserate individuals, families and communities by reducing our capacity to live with and resolve conflict.

Rather than threatening our livelihoods, perhaps advances in AI social-emotional capacities are part of the solution to our soft-skills gap. A recent Stanford University study found that use of chat technology dramatically raised job performance among lower-skilled customer service representatives, in large part by helping them better manage social interactions with frustrated callers. If we conceive of soft-skill deficits as a form of cognitive impairment or shortcoming, AI may turn out to be a kind of assistive technology that helps human beings who have difficulty reading and responding to other people.

There may be those who recoil from the idea that this technology might be used as a cognitive intervention. I'd invite them to think about how we use technology to help people with physical limitations. We wouldn't deny a wheelchair to someone who can't walk. Likewise, we shouldn't deny a cognitively or emotionally impaired person an electronic coach that could help them live a better, fuller life.

Brent Orrell is a senior fellow at the American Enterprise Institute, where he works on job training, workforce development and criminal justice reform.

Artificial Intelligence and Machine Learning Can Revolutionize … – Fagen wasanni

The Nigerian Communications Commission (NCC) has highlighted the potential of Artificial Intelligence (AI) and Machine Learning (ML) to revolutionize various industries. The Executive Vice Chairman of NCC, Prof. Umar Danbatta, made this statement at the 2023 ICTEL Expo organized by the Lagos Chamber of Commerce and Industry (LCCI). The theme of the event was "Tech Disruption: Transforming Industries with Innovation."

According to Danbatta, AI and ML technologies have the power to shape sectors such as healthcare, finance, manufacturing, and transportation. He mentioned that AI-powered algorithms enable accurate predictions, improved decision-making, and automation of mundane tasks. By analyzing vast amounts of data, businesses can gain valuable insights and optimize their operations to deliver better products and services.

Danbatta also highlighted the impact of the Internet of Things (IoT) on industries such as agriculture, energy, and logistics. IoT enables resource optimization, equipment monitoring, and overall efficiency improvement through real-time data provided by sensors and smart devices.

Furthermore, Danbatta noted the transformative force of blockchain technology, particularly in finance and supply chain management. Blockchain creates decentralized and transparent ledgers, ensuring secure and efficient transactions while reducing costs and eliminating intermediaries.

The fifth generation network (5G) was also mentioned as an enabler of new possibilities in autonomous vehicles, augmented reality, and telemedicine. The convergence of Virtual Reality (VR) and Augmented Reality (AR) technologies is disrupting multiple industries, especially in entertainment, education, and retail, by offering immersive and interactive experiences.

To embrace innovation and adapt to the changing landscape, businesses are advised to be agile and experiment with emerging technologies. The NCC believes that this disruption and innovation will drive sustainable growth, economic diversification, and enhanced living standards for all Nigerians.

The commission's strategic vision plan, Aspire 2024, prioritizes connectivity and broadband access as vital for socio-economic development. By expanding network coverage and promoting broadband infrastructure deployment, the NCC aims to provide reliable and affordable internet access to every corner of Nigeria.

As of May 2023, telecom subscriptions in Nigeria reached 227,179,946 with a teledensity of 119 percent. The telecom industry contributed 14.13 percent to the GDP in the first quarter of 2023. The NCC also focuses on consumer protection, privacy, data security, and efficient spectrum management to optimize connectivity and facilitate emerging technologies.

By promoting AI, ML, IoT, blockchain, 5G, VR, and AR, the NCC intends to unlock the transformative potential of these technologies and enable new services and applications in Nigeria.

Hawley, Blumenthal Hold Hearing On Principles For Regulating … – Josh Hawley

U.S. Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.), Ranking Member and Chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, held a hearing on the guiding principles for regulating artificial intelligence (A.I.) moving forward.

Senator Hawley questioned leaders in the A.I. space, including Dario Amodei, Cofounder and CEO of Anthropic; Yoshua Bengio, Professor at the Université de Montréal; and Stuart Russell, Professor of Computer Science at the University of California, Berkeley, on the role of Big Tech in smaller A.I. development firms, the importance of safeguarding our A.I. supply chains, and the issue of offshoring A.I.-related jobs.

"For my part, I have expressed my own sense of what our priorities ought to be when it comes to legislation. It's very simple: workers, kids, consumers, and national security," said Senator Hawley. "As A.I. develops, we have got to make sure that we have safeguards in place that will ensure this new technology is actually good for the American people."

He continued, "I'm less interested in the corporation's profitability; in fact, I'm not interested in that at all. I'm interested in protecting the rights of American workers and American families and American consumers against these massive companies that threaten to become a total law unto themselves."

Watch Senator Hawley's full statements and hearing Q&A here or above.

AI-enhanced night-vision lets users see in the dark – Nature.com

In this episode:

There are many methods for better night vision, but these often rely on enhancing light, which may not be present, or on devices that can interfere with one another. One alternative solution is to use heat, but such infrared sensors struggle to distinguish between different objects. To overcome this, researchers have now combined such sensors with machine learning algorithms to make a system that grants day-like night vision. They hope it will be useful in technologies such as self-driving cars.

Research article: Bao et al.

News and Views: Heat-assisted imaging enables day-like visibility at night

Benjamin Franklin's anti-counterfeiting money-printing techniques, and how much snow is really on top of Mount Everest?

Research Highlight: Ben Franklin: founding father of anti-counterfeiting techniques

Research Highlight: How much snow is on Mount Everest? Scientists climbed it to find out

We discuss some highlights from the Nature Briefing. This time, the cost to scientists of English not being their native language, and the mysterious link between COVID-19 and type 1 diabetes.

Nature News: The true cost of science's language barrier for non-native English speakers

Nature News: As COVID-19 cases rose, so did diabetes; no one knows why

Subscribe to Nature Briefing, an unmissable daily round-up of science news, opinion and analysis, free in your inbox every weekday.

Never miss an episode. Subscribe to the Nature Podcast on Apple Podcasts, Google Podcasts, Spotify or your favourite podcast app. An RSS feed for the Nature Podcast is available too.

Artificial Intelligence: Recent Congressional Activity and a Look to … – JD Supra

In the past few months, the American public has become increasingly fixated on artificial intelligence (AI), especially generative artificial intelligence (GenAI), because of the economic and social considerations associated with this developing technology.

AI has inspired contemplation of its potential benefits in the fight against cancer, has become one of the issues in the Hollywood writers' and actors' strikes, has led a group of tech executives to warn that humans could face extinction from AI, and has led many people to crack jokes, perhaps a bit nervously, about the robots taking over.

While many people in business, medicine, and the arts (to name a few) are contemplating how to harness its capabilities, there is increasing interest among Members of Congress to determine whether and how the federal government can and should regulate AI, especially GenAI. One House Member told us that in the past couple of months, interest in AI at the Member level has gone from zero to 60. Reflecting the concerns that some policymakers share about GenAI, in one recent Senate hearing, Subcommittee Chairman Sen. Richard Blumenthal (D-CT) made the case for regulation by creating his own deepfake using AI and an AI-generated voice (lifted from his speeches) to deliver an AI-generated opening statement that he developed by asking ChatGPT to draft remarks he would make at the beginning of a hearing on AI. Meanwhile, the House of Representatives took the step of laying out guidelines for use of ChatGPT by Members and their staffs for research and evaluation only at this time.

Given the widespread policy implications of AI, we can expect continued Congressional activity in this area. This alert provides an overview of what the current Congress is doing to educate itself and legislate on topics associated with AI.

For the purposes of this alert, we are using the term GenAI to mean the kind of AI that can create new content, like text, images, and video, by learning from pre-existing and publicly available data sources. As our colleagues noted in a June 7 alert on GenAI and legal considerations for the trade association and nonprofit industry, popular examples of GenAI include OpenAI's ChatGPT, GitHub Copilot, DALL-E, HarmonAI, and Runway, which can generate text, computer code, images, songs, and videos, respectively, with limited human involvement.

The environment for Congressional action on AI is hazy at the moment. While there is great interest in the issue, many of the major players in Congress are trying to address very different problems that AI and GenAI will impact in the coming years. Because the universe of issues is so vast, each Member of Congress seems to have his or her own pet priority in this area. For example, on July 13, 50 Democratic Members wrote the Federal Election Commission (FEC) to express concern about the impact of AI-generated campaign advertisements, particularly those that are fraudulent in nature, and have requested that the FEC begin setting up a framework to regulate AI political ads.

AI has inspired contemplation of the potential benefits of AI in the fight against cancer, has become one of the factors at issue in the Hollywood writers' and actors' strikes, has led a group of tech executives to warn that humans could face extinction from AI, and has led many people to crack jokes, perhaps a bit nervously, about the robots taking over. Companies, trade associations, and nonprofits with a stake in the AI debate and with particular insight to share should be active at this time, focusing on the Members who are most active and on the multiple committees of jurisdiction.

Dan Renberg, Government Relations Practice Co-Leader

The national security implications of AI have caught the attention of many in Congress. For example, on April 19, under the leadership of Chairman Joe Manchin (D-WV), the Senate Armed Services Subcommittee on Cybersecurity held a hearing to receive testimony from outside experts and industry leaders on the state of AI and machine learning applications to improve Department of Defense operations. Expert witnesses in Defense AI highlighted the technical challenges identifying key technologies and integrating them into the system while ensuring that the applications deployed are secure and trusted.

With enormous stakes for the United States, there is a universal appetite in Washington for regulation of AI but no consensus about AI policy, or the regulatory regime to sustain it. The proposals circulating in Congress are merely the starter's gun for a debate challenging policymakers and regulators to develop expertise and adapt to rapid tech developments. Key formative decisions about regulatory design are looming that will permanently impact America's AI position globally.

Congressman Phil English, Senior Government Relations Advisor

Others are focused on the impact on consumers and disenfranchised populations. Sen. Jon Ossoff (D-GA) has focused his efforts on protecting human rights and ensuring that people's civil rights are not violated as AI scrapes the web (read our recent Privacy Counsel blog post on increasing lawsuits involving data scraping and GenAI tools). Sen. Chris Coons (D-DE) is focused on the impact of AI on patents, trademarks, and the creative economy. At a June 7 hearing, his Senate Subcommittee on Intellectual Property considered questions such as whether, and how, to compensate artists if GenAI creates a song that sounds like Taylor Swift's music but is not a sample or carbon copy. At a recent hearing on AI in the same Subcommittee, Sen. Thom Tillis (R-NC) stated that the creative community is experiencing immediate and acute challenges due to the impact of generative AI. Others, like Sens. Dick Durbin (D-IL) and Lindsey Graham (R-SC), have focused on the need to protect children from adults who create AI-generated child sexual abuse materials by instructing platforms to generate images that combine real faces with AI-generated bodies.

Congressman Jay Obernolte (R-CA) has begun to attract attention as a leading expert on AI because of his professional and educational background, which includes an advanced degree in computer science and a former career as a computer programmer. In addition to being Vice Chair of the Congressional Artificial Intelligence Caucus, Rep. Obernolte recently authored an op-ed column in The Hill in which he provided an overview of multiple policy implications of GenAI, called for industry and government guardrails to prevent misuse of this promising technology, and noted the need to align our nations education system with the changes that AI will bring over time.

China's advancement in AI research and technologies has also been a major focal point of discussion in Congress, especially during AI-related hearings. At a June 22 hearing of the House Science, Space, and Technology Committee, Chairman Frank Lucas (R-OK) stated: "While the United States currently is the global leader in AI research, development, and technology, our adversaries are catching up. The Chinese Communist Party is implementing AI industrial policy at a national scale, investing billions through state-financed investment funds, designating national AI champions, and providing preferential tax treatment to grow AI startups. We cannot and should not try to copy China's playbook. But we can maintain our leadership role in AI, and we can ensure it's developed with our values of trustworthiness, fairness, and transparency. To do so, Congress needs to make strategic investments, build our workforce, and establish proper safeguards without overregulation."

We rely on AI every day. It is navigation for our cars, Siri on our iPhone, robotic vacuum cleaners and so much more. But the advance of AI to develop machines that think, reason, and possess intelligence requires us to understand how we prevent building machines with the capability that would threaten human life. Congress and the Administration are beginning to recognize that there are many policy questions that relate to AI, including Generative Artificial Intelligence (GenAI) and Artificial Super Intelligence (ASI). Time is short for us to decide how to regulate AI.

Senator Byron Dorgan, Senior Policy Advisor

There are also big philosophical questions about how and where the government should insert itself in the process of regulating and fostering AI development. Europe has created an AI sandbox, where developers can test out their AI products in a safe environment that allows academics to study the harms, impacts, and other implications. In the US, observers have thus far landed in two camps: (1) advocates for creating a new federal agency to regulate AI; or (2) those who prefer to let the private sector innovate and do what scaled the technology to this point. These viewpoints cross party lines and political ideologies at various intersections. Some free-market Republicans have said that the government can use Section 230 of the Communications Decency Act, which has traditionally been used to manage online speech and moderate social media content, to regulate AI. This set of small-government Republicans also thinks that there is no need to create a new agency because Section 230 should suffice. On the left, some policymakers are pushing for a new federal agency to collect data on AI and study this issue in detail. One example is the bill introduced in May by Sens. Michael Bennet (D-CO) and Peter Welch (D-VT), which would establish a Federal Digital Platform Commission that would, among other things, regulate GenAI. This is also the stance of the Biden Administration, which has requested from Congress $2.6 billion for the National Artificial Intelligence Research Resource (NAIRR) Task Force. The Biden Administration also released its Blueprint for an AI Bill of Rights last fall, which landed with a thud in Washington among the major players.

At the moment, given the novelty of GenAI and the lack of deep technological understanding among some Members of Congress, there is some confusion about the nature of GenAI and the diverse issues it can create. It is a positive development that on the Senate side, to help bring everyone up to speed, Majority Leader Chuck Schumer (D-NY), Sen. Todd Young (R-IN), and others are holding three bipartisan briefings for the entire Senate that will feature academics, major industry players, and government officials. Leader Schumer also laid out a framework on June 21 that explained what he intends for the Senate to focus on regarding AI in the coming months. This follows on the heels of an educational session on AI that Speaker Kevin McCarthy (R-CA) held for Members of the House of Representatives earlier this year and private briefings that other groups of House Members have planned for themselves.

It is worth noting that the European Union has been actively working on a regulatory framework for AI, with the European Parliament approving a massive EU AI Act in mid-June that aims to protect the general public from abuses that could arise through the use of AI. Reactions from US policymakers were mixed, with Sen. Michael Bennet (D-CO) commenting, "The United States should be the standard-setter. We need to lead that debate globally, and I think we're behind where the EU is," while Sen. Mike Rounds (R-SD) indicated that he was not as concerned about falling behind the EU on the regulatory front and was more concerned about continuing to facilitate US dominance in developing new innovations like GenAI.

The nature of AI is such that it will take time for Members of Congress to gain a comfort level with its true potential and what, if any, guardrails are needed. As they increase their familiarity and consult with industry and other stakeholders, it is possible that a consensus will occur and some initial regulatory steps will take place beyond merely introducing bills or holding hearings. As AI dominates public discourse, we can expect a ramping-up of legislative activity. Constituents expressing views positive or negative about GenAI when Members are home in their states could also impact the timeline.

The legal and policy framework for regulating AI is going to be a front burner issue for Congress and the Administration for some time to come. It is incumbent upon stakeholders with interest in this issue to develop policy principles and recommendations and to convey them to the Hill and relevant agencies.

Senator Doug Jones, Counsel

It is worth noting that according to a study by OpenSecrets, which tracks money in politics, 123 companies, universities, and trade associations spent a collective $94 million lobbying the federal government on issues involving AI in the first quarter of 2023. Accordingly, companies, trade associations, and nonprofits with a stake in the AI debate and with particular insight to share should be active at this time, focusing on the Members who are most engaged with the issues and on the multiple committees of jurisdiction.

Machine learning and computer vision allow study of animal behavior without markers – Phys.org

With a new markerless method, it is now possible to track the gaze and fine-scaled behaviors of every individual bird and how each animal moves in space with others. A research team from the Cluster of Excellence Center for the Advanced Study of Collective Behavior (CASCB) at the University of Konstanz developed a dataset to advance behavioral research.

Researchers are still puzzling over how animal collectives behave, but recent advances in machine learning and computer vision are revolutionizing the possibilities for studying animal behavior. Complex behaviors, like social learning or collective vigilance, can now be deciphered with these new techniques.

An interdisciplinary research team from the Cluster of Excellence Center for the Advanced Study of Collective Behavior (CASCB) at the University of Konstanz and the Max Planck Institute of Animal Behavior has now succeeded in developing a novel markerless method to track bird postures in 3D just by using video recordings. Credit: University of Konstanz

It is no longer necessary to attach position or movement transmitters to the animals. With this method, called 3D-POP (3D posture of pigeons), it is possible to record a group of pigeons and identify the gaze and fine-scaled behaviors of every individual bird and how each animal moves in space with others. "With the dataset, researchers can study collective behavior of birds by just using at least two video cameras, even in the wild," says Alex Chan, Ph.D. student at the CASCB.

The dataset was released at the Conference on Computer Vision and Pattern Recognition (CVPR) in June 2023 and is available via open access so that it can be reused by other researchers. The researchers, Hemal Naik and Alex Chan, see two potential application areas: scientists working with pigeons can use the dataset directly, studying the behavior of multiple freely moving pigeons with at least two cameras, and the annotation method can be used with other birds, or even other animals, so that researchers can soon decipher their behavior as well.
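The "at least two cameras" requirement reflects the geometry of stereo triangulation: a point seen from two viewpoints can be located in 3D. As an illustration only (this is not the 3D-POP pipeline, which uses marker-based motion capture to automate annotation), here is a minimal sketch of depth recovery from a rectified stereo pair; the focal length, baseline, and pixel coordinates below are hypothetical values.

```python
def triangulate_rectified(x_left, x_right, y, focal_px, baseline_m):
    """Recover a 3D point from matched pixels in a rectified stereo pair.

    Assumes two identical, parallel cameras separated horizontally by
    baseline_m (an idealized setup, not the paper's actual rig).
    """
    disparity = x_left - x_right  # horizontal pixel shift between views
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    z = focal_px * baseline_m / disparity  # depth from similar triangles
    x = x_left * z / focal_px              # back-project to metric X
    y3d = y * z / focal_px                 # back-project to metric Y
    return (x, y3d, z)

# A bird's head seen 50 px apart by two cameras 0.5 m apart (f = 1000 px)
point = triangulate_rectified(100.0, 50.0, 20.0, 1000.0, 0.5)
# point is (1.0, 0.2, 10.0): one metre right, 10 metres from the cameras
```

A wider baseline or higher-resolution cameras makes the depth estimate less sensitive to pixel noise, which is one reason multi-camera setups scale well to field recordings.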

More information: 3D-POP: An Automated Annotation Approach to Facilitate Markerless 2D-3D Tracking of Freely Moving Birds With Marker-Based Motion Capture. openaccess.thecvf.com/content/ CVPR_2023_paper.html

Central Texas Poised to Benefit from Artificial Intelligence Boom – Fagen wasanni

As artificial intelligence (AI) tools gain popularity, Central Texas is positioned to be at the forefront of the generative AI boom. Austin, in particular, is considered an early adopter of AI technology. However, its ability to remain competitive in this sector and its dependence on Silicon Valley's AI development activities will determine its success.

According to a report from the Brookings Institution, Austin is still a significant player in the AI ecosystem, but it falls behind San Francisco and San Jose in terms of AI papers, patents, and companies. Despite this ranking, Austin's position as a technology hub and home to major tech companies such as Tesla, Dell, and Oracle contributes to its status as an early adopter.

Generative AI refers to AI tools that utilize algorithms to generate content such as text, audio, code, and images. Prominent examples include OpenAI's ChatGPT and DALL-E. The report raises questions about the future of AI development and whether it will remain concentrated in Silicon Valley or expand to other regions like Austin.

The report suggests that the hyperconcentrated tech geography could lead to limited advancements and imbalances in resources and talent. To counter these potential issues, the report recommends expanding public sector research, investing in AI talent in new places, and encouraging AI development in more regions. Universities are predicted to play a crucial role in driving AI technology work and research contracts.

Two-thirds of the nation's AI assets and capabilities are concentrated in the 15 AI metro areas analyzed in the report, which include San Francisco, San Jose, and Austin. The majority of generative AI job postings were found in San Francisco, San Jose, New York, Los Angeles, Boston, and Seattle.

While the Bay Area, particularly San Francisco, leads in terms of generative AI job postings and AI-focused startups, Austin has also seen a significant number of generative AI startups founded between 2018 and 2023.

Overall, Central Texas, with Austin at its core, is well-positioned to benefit from the growing AI industry. However, the decentralization of AI development and the expansion of AI ecosystems beyond Silicon Valley will be pivotal in determining Austin's future as an AI leader.

Human Activity Recognition Using Deep Learning Techniques – Fagen wasanni

Advances in sensor technology have led to a surge of interest in recognizing human activities based on sensor data. This recognition, known as Human Activity Recognition (HAR), has wide-ranging applications in everyday life, such as medical care, movement analysis, intelligent monitoring systems, and smart homes.

HAR can be categorized into two main classes: video-based and sensor-based. Video-based HAR systems rely on cameras to capture videos and images and utilize computer vision technology to identify human actions. However, these systems are susceptible to environmental factors and privacy concerns. In contrast, sensor-based systems use environmental or wearable sensors embedded in smart devices like smartphones and smartwatches to determine human actions.

Wearable sensors present a complex challenge in HAR due to the classification of time-series data with multiple variables. Traditional machine learning algorithms have been successful in categorizing human behaviors, but manual feature extraction requires specialized knowledge, limiting its practicality. Deep learning models, particularly convolutional neural networks (CNN), have revolutionized HAR by automating the feature extraction process.

CNN models have proven effective in extracting features and achieving accuracy in sensor-based HAR. The combination of CNN and recurrent neural networks (RNN) allows for a comprehensive representation of spatial and temporal features. To enhance the effectiveness of HAR, the squeeze-and-excitation (SE) block acts as a channel-attention mechanism to prioritize valuable feature maps while suppressing unreliable ones.
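The squeeze-and-excitation idea described above can be sketched in plain Python: pool each channel to a single descriptor ("squeeze"), pass the descriptors through a small two-layer gate ("excitation"), and rescale the channels. This is an illustrative toy, not the article's ResNet-BiGRU-SE model; the layer sizes and random weights are hypothetical stand-ins for parameters a network would learn.

```python
import math
import random

def se_block(feature_maps, reduction=2, seed=0):
    """Squeeze-and-excitation channel attention on a list of channels.

    feature_maps: list of channels, each a flat list of activations.
    The weights here are random placeholders for learned parameters.
    """
    channels = len(feature_maps)
    hidden = max(1, channels // reduction)  # bottleneck width
    rng = random.Random(seed)
    w1 = [[rng.uniform(-1, 1) for _ in range(channels)] for _ in range(hidden)]
    w2 = [[rng.uniform(-1, 1) for _ in range(hidden)] for _ in range(channels)]

    # Squeeze: global average pooling collapses each channel to one number
    pooled = [sum(ch) / len(ch) for ch in feature_maps]

    # Excitation: FC -> ReLU -> FC -> sigmoid yields a gate in (0, 1) per channel
    h = [max(0.0, sum(w * p for w, p in zip(row, pooled))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * v for w, v in zip(row, h))))
             for row in w2]

    # Scale: emphasize valuable channels, suppress unreliable ones
    return [[a * g for a in ch] for ch, g in zip(feature_maps, gates)]

reweighted = se_block([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```

Because the gates are sigmoid outputs, each channel is scaled by a factor strictly between 0 and 1, which is how the mechanism prioritizes some feature maps over others without changing their shape.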

In this study, a novel approach called ResNet-BiGRU-SE is proposed, combining a hybrid CNN with a channel attention system for human activity recognition. Experiments using standard datasets demonstrated that the proposed model outperforms previous deep learning architectures in terms of accuracy.

The utilization of sensor-based HAR holds immense potential in various domains, such as healthcare, sports analysis, surveillance systems, and human-robot interactions. It enables advanced movement tracking systems, automatic interpretation of player actions, user identification in surveillance, and gesture recognition.

Harnessing the power of sensor-based HAR can bring significant advantages and advancements to these diverse sectors. The proposed model presents a promising solution for accurately identifying and predicting human behaviors based on sensor data.

Go here to see the original:
Human Activity Recognition Using Deep Learning Techniques - Fagen wasanni


A new dataset of Arctic images will spur artificial intelligence research – MIT News

As the U.S. Coast Guard (USCG) icebreaker Healy takes part in a voyage across the North Pole this summer, it is capturing images of the Arctic to further the study of this rapidly changing region. Lincoln Laboratory researchers installed a camera system aboard the Healy while it was in port in Seattle, before it embarked on a three-month science mission on July 11. The resulting dataset, which will be one of the first of its kind, will be used to develop artificial intelligence tools that can analyze Arctic imagery.

"This dataset not only can help mariners navigate more safely and operate more efficiently, but also help protect our nation by providing critical maritime domain awareness and an improved understanding of how AI analysis can be brought to bear in this challenging and unique environment," says Jo Kurucar, a researcher in Lincoln Laboratory's AI Software Architectures and Algorithms Group, which led this project.

As the planet warms and sea ice melts, Arctic passages are opening to more traffic, from military vessels to ships conducting illegal fishing. These movements may pose national security challenges to the United States. The opening Arctic also raises questions about how the region's climate, wildlife, and geography are changing.

Today, very few imagery datasets of the Arctic exist to study these changes. Overhead images from satellites or aircraft can only provide limited information about the environment. An outward-looking camera attached to a ship can capture more details of the setting and different angles of objects, such as other ships, in the scene. These types of images can then be used to train AI computer-vision tools, which can help the USCG plan naval missions and automate analysis. According to Kurucar, USCG assets in the Arctic are spread thin and can benefit greatly from AI tools, which can act as a force multiplier.

The Healy is the USCG's largest and most technologically advanced icebreaker. Given its current mission, it was a fitting candidate to be equipped with a new sensor to gather this dataset. The laboratory research team collaborated with the USCG Research and Development Center to determine the sensor requirements. Together, they developed the Cold Region Imaging and Surveillance Platform (CRISP).

"Lincoln Laboratory has an excellent relationship with the Coast Guard, especially with the Research and Development Center. Over a decade, we've established ties that enabled the deployment of the CRISP system," says Amna Greaves, the CRISP project lead and an assistant leader in the AI Software Architectures and Algorithms Group. "We have strong ties not only because of the USCG veterans working at the laboratory and in our group, but also because our technology missions are complementary. Today it was deploying infrared sensing in the Arctic; tomorrow it could be operating quadruped robot dogs on a fast-response cutter."

The CRISP system comprises a long-wave infrared camera, manufactured by Teledyne FLIR (for forward-looking infrared), that is designed for harsh maritime environments. The camera can stabilize itself during rough seas and image in complete darkness, fog, and glare. It is paired with a GPS-enabled time-synchronized clock and a network video recorder to record both video and still imagery along with GPS-positional data.

The camera is mounted at the front of the ship's fly bridge, and the electronics are housed in a ruggedized rack on the bridge. The system can be operated manually from the bridge or be placed into an autonomous surveillance mode, in which it slowly pans back and forth, recording 15 minutes of video every three hours and a still image once every 15 seconds.
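The autonomous surveillance duty cycle described above can be sketched as a small Python function. The article gives only the rates (15 minutes of video every three hours, one still every 15 seconds); the alignment of the cycles to a common start time is an assumption made for illustration.

```python
def autonomous_mode_actions(t_seconds):
    """Return (video_recording, capture_still) at elapsed time t_seconds.

    Assumes both cycles start at t = 0: a 15-minute video window opens
    at the top of every 3-hour period, and a still fires every 15 seconds.
    """
    video_recording = (t_seconds % (3 * 3600)) < (15 * 60)
    capture_still = (t_seconds % 15) == 0
    return video_recording, capture_still
```

At these rates the system collects eight video segments and 5,760 still frames per day, which is consistent with the multi-terabyte dataset the team expects.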

"The installation of the equipment was a unique and fun experience. As with any good project, our expectations going into the install did not meet reality," says Michael Emily, the project's IT systems administrator who traveled to Seattle for the install. Working with the ship's crew, the laboratory team had to quickly adjust their route for running cables from the camera to the observation station after they discovered that the expected access points weren't in fact accessible. "We had 100-foot cables made for this project just in case of this type of scenario, which was a good thing because we only had a few inches to spare," Emily says.

The CRISP project team plans to publicly release the dataset, anticipated to be about 4 terabytes in size, once the USCG science mission concludes in the fall.

The goal in releasing the dataset is to enable the wider research community to develop better tools for those operating in the Arctic, especially as this region becomes more navigable. "Collecting and publishing the data allows for faster and greater progress than what we could accomplish on our own," Kurucar adds. "It also enables the laboratory to engage in more advanced AI applications while others make more incremental advances using the dataset."

On top of providing the dataset, the laboratory team plans to provide a baseline object-detection model, from which others can make progress on their own models. More advanced AI applications planned for development are classifiers for specific objects in the scene and the ability to identify and track objects across images.

Beyond assisting with USCG missions, this project could create an influential dataset for researchers looking to apply AI to data from the Arctic to help combat climate change, says Paul Metzger, who leads the AI Software Architectures and Algorithms Group.

Metzger adds that the group was honored to be a part of this project and is excited to see the advances that come from applying AI to novel challenges facing the United States: "I'm extremely proud of how our group applies AI to the highest-priority challenges in our nation, from predicting outbreaks of Covid-19 and assisting the U.S. European Command in their support of Ukraine to now employing AI in the Arctic for maritime awareness."

Once the dataset is available, it will be free to download on the Lincoln Laboratory dataset website.

The rest is here:
A new dataset of Arctic images will spur artificial intelligence research - MIT News


The Role of AI and Machine Learning in Optimizing Irrigation Emitters – EnergyPortal.eu

Exploring the Impact of AI and Machine Learning on the Optimization of Irrigation Emitters

The advent of Artificial Intelligence (AI) and Machine Learning (ML) has ushered in a new era of technological advancements, revolutionizing various sectors, including agriculture. In particular, these technologies are playing a significant role in optimizing irrigation emitters, thereby improving water efficiency and crop yield.

Irrigation emitters, the components of an irrigation system that distribute water to the plants, are critical to the success of agricultural endeavors. Traditionally, the optimization of these emitters has been a manual and time-consuming process, often leading to water wastage and sub-optimal crop yield. However, with the integration of AI and ML, this scenario is rapidly changing.

AI and ML algorithms can analyze vast amounts of data from various sources, such as weather forecasts, soil moisture sensors, and crop health indicators. This data analysis allows the system to make informed decisions about when and how much to irrigate, minimizing water waste and maximizing crop yield. For instance, if the system detects an upcoming rainfall, it can reduce or even stop irrigation, saving significant amounts of water.
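The decision logic described above can be illustrated with a short Python sketch. All names and thresholds here (the 5 mm rain-skip cutoff, the minutes-per-point scaling) are made up for illustration; a production system would learn these from sensor and yield data.

```python
def irrigation_minutes(soil_moisture, target_moisture, forecast_rain_mm,
                       minutes_per_point=2.0, rain_skip_threshold_mm=5.0):
    """Decide how long to run the emitters.

    soil_moisture / target_moisture: volumetric water content, in percent.
    forecast_rain_mm: rain expected over the next 24 hours.
    """
    # Skip irrigation entirely when meaningful rain is forecast.
    if forecast_rain_mm >= rain_skip_threshold_mm:
        return 0.0
    # Otherwise, run time is proportional to the moisture deficit.
    deficit = max(target_moisture - soil_moisture, 0.0)
    return deficit * minutes_per_point
```

Even this toy rule captures the two behaviors the article highlights: irrigation stops ahead of rainfall, and water delivered tracks actual soil need rather than a fixed schedule.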

Moreover, these technologies can predict future irrigation needs based on historical data and current conditions. This predictive capability enables farmers to plan their irrigation schedules more effectively, further enhancing water efficiency. Additionally, AI and ML can identify patterns and trends that may not be apparent to a human observer, providing valuable insights for improving irrigation strategies.
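As a minimal stand-in for the predictive capability described above, the sketch below forecasts tomorrow's water demand as a trailing average of recent days. This is deliberately naive; a real system would use richer features (weather forecasts, crop stage, soil sensors) and a trained model.

```python
import numpy as np

def forecast_demand(daily_demand_litres, window=7):
    """Forecast tomorrow's irrigation demand as the mean of the last
    `window` days of observed demand (a simple moving-average baseline)."""
    recent = np.asarray(daily_demand_litres[-window:], dtype=float)
    return recent.mean()
```

A baseline like this is also useful for evaluation: if a learned model cannot beat the trailing mean on held-out days, its extra complexity is not paying for itself.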

The application of AI and ML in optimizing irrigation emitters also contributes to sustainability. Agriculture is one of the largest consumers of freshwater globally, and efficient irrigation is key to reducing water usage. By optimizing irrigation emitters, AI and ML can significantly reduce water consumption, contributing to the conservation of this precious resource.

Furthermore, these technologies can help mitigate the effects of climate change on agriculture. As weather patterns become increasingly unpredictable, the ability to adapt irrigation strategies in real time becomes crucial. AI and ML, with their predictive and adaptive capabilities, can help farmers navigate these challenges, ensuring the continued productivity of their farms.

However, the implementation of AI and ML in optimizing irrigation emitters is not without challenges. The accuracy of these systems depends on the quality and quantity of data available, so robust data collection and management systems are needed to support them. Ongoing research and development is also needed to further refine these technologies and make them more accessible to farmers worldwide.

In conclusion, AI and ML are playing a pivotal role in optimizing irrigation emitters, improving water efficiency, and enhancing crop yield. These technologies are not only transforming agriculture but also contributing to sustainability and climate change mitigation. As we continue to explore and harness the potential of AI and ML, we can look forward to a future where agriculture is more efficient, sustainable, and resilient.

See more here:
The Role of AI and Machine Learning in Optimizing Irrigation Emitters - EnergyPortal.eu
