
Ransomware Will Grind You Down Without Proper Precautions, FBI Tells Local Governments – MSSP Alert

by D. Howard Kass Apr 4, 2022

The FBI has warned local governments to expect that ransomware attacks on agencies will have significant repercussions by straining financial and operational resources and disrupting a multitude of services.

In a newly issued Private Industry Notification (PIN), the federal law enforcement arm said previous ransomware attacks on local government agencies had disrupted public and health services and emergency and safety operations, and compromised personal data.

The severity, frequency, and damage of attacks are not likely to ebb, the FBI said. In the next year, local U.S. government agencies almost certainly will continue to experience ransomware attacks, particularly as malware deployment and targeting tactics evolve, further endangering public health and safety and resulting in significant financial liabilities, the law enforcement agency said.

With the cybersecurity community's attention drawn largely to the war in Ukraine and the resulting threats of global cyber warfare, the FBI's missive serves as a reminder to the public/private sector and to managed security service providers of ransomware's potential for destruction.

In compiling information for the PIN, the FBI appeared to rely heavily on a 2021 study conducted in the U.K. on the state of ransomware in 30 countries. Here's some top-line data from that report:

The report also provides some details on four ransomware attacks dating to January 2021, spaced roughly four months apart, that hit the local government sector:

"Ransomware attacks against local government entities and the subsequent impacts are especially significant due to the public's dependency on critical utilities, emergency services, educational facilities, and other services overseen by local governments, making them attractive targets for cyber criminals," the alert said.

The FBI has compiled a list of recommended best practices to fend off ransomware's capabilities:

To limit an adversary's ability to learn an organization's enterprise environment and to move laterally, take the following actions:

Continue reading here:
Ransomware Will Grind You Down Without Proper Precautions, FBI Tells Local Governments - MSSP Alert

Read More..

Partner Programs Are The Glue That Bond IT Vendors And The Channel – CRN

Some of the IT industry's biggest vendors have launched significant revamps and expansions of their partner programs in recent months, including cloud platform giant Amazon Web Services, chipmaker AMD, computer systems manufacturers Dell Technologies and Lenovo, and software and cloud behemoth Microsoft.

While those and other recent launches of new and updated IT vendor partner programs differ in the details, there are several common themes among the announcements: simplifying partner engagement, supporting evolving partner types, providing partners with training and certification in new technologies, helping channel partners adapt to new customer buying practices like as-a-service, and creating new ways for partners to increase recurring revenue.

Dell, for example, recently launched a new partner program for its fiscal 2023 that brings together solution providers, cloud service providers and OEMs; offers new incentives aimed at driving partner profitability in midrange storage, client systems and VMware solutions; seeks to drive as-a-service sales; streamlines the Partner Experience Center and Online Systems Configurator; and invests more in the channel than in fiscal 2022.

"We have a robust reseller ecosystem operating in a traditional capex model who have impeccable momentum right now. We have emerging and evolving partners co-engineering around OEM, edge and telecom. These partners have positioned us for the future," Dell Technologies global channel chief Rola Dagher told CRN in February. "We have partners who have been providing managed services with Dell infrastructure long before as-a-service was cool. And those are the partners we're seeing providing so much more value than just shift-and-lift."

Data integration and analytics platform developer Qlik devoted 2021 to re-aligning its channel operations and updating its partner program to support the company's shift to a software-as-a-service model.

"2021 was about enabling partners to build profitable and thriving recurring revenue businesses on our cloud platform, and I'm proud of our successes in that area," Poornima Ramaswamy, Qlik executive vice president of global partnerships, told CRN. "We continued to build momentum behind our SaaS-first focus to completely modernize our partner program, which now includes a dedicated Cloud Services track. This is helping channel partners move beyond the traditional reseller model to an expanded emphasis on a recurring revenue, customer lifecycle-based co-sell model that matches how customers want to consume through the cloud."

AWS unveiled a major overhaul of its partner program in November, introducing five partner paths to streamline the program, simplify engagement and expand partner access to benefits. AMD introduced in February a new partner program with aggressive rebates for commercial systems sellers. And last year Cisco Systems launched the biggest changes to its partner program structure in more than a decade with an increased focus on lifecycle, managed services and everything-as-a-service opportunities.

Some IT companies have spent the last year implementing partner program changes and reaping the benefits of those changes. Juniper Networks saw partner-initiated business grow 110 percent in the year since November 2020 when the company unveiled a plan at its Partner Summit to invest heavily in the channel.

"We went on a journey with the partners," channel chief Gordon Mackintosh told CRN in November of Juniper's channel efforts in 2021. "We invested in them and got staggering results. Really, what the partners have been asking for is more investment, joint investments, and alignment with Juniper to work more closely with us than ever before."

At OpsRamp, a developer of IT operations management tools, channel chief Paul Brodie oversaw a major revamp of the company's channel program in 2021. "We had a partner program, but it was kind of table stakes and we needed to do a refresh," said Brodie, who became vice president of global channel sales earlier in the year.

San Jose, Calif.-based OpsRamp, which sells 100 percent through the channel, updated its partner program to offer what the company described as a more partner-friendly profit-sharing model, along with enhanced lead sharing, more comprehensive sales assistance, and other resources.

It's not just the major vendors that have recently undertaken partner program initiatives. Multi-cloud networking startup Prosimo launched its inaugural partner program in January, while cloud security company Netskope did likewise in March. IT infrastructure automation company Puppet debuted a new competency-focused partner program in January with the goal of empowering its 200 global partners to be more competitive.

IT vendors rely on partners to expand their market reach, while solution providers rely on vendors for the IT products and related services their customers are asking for. Partner programs are the glue that holds those relationships together.

The annual CRN Partner Program Guide is based on detailed applications submitted by IT companies (more than 300 this year) outlining all aspects of their channel programs. The complete Partner Program list can be found here.

The Channel Company's research team analyzes the applications and designates some of the programs as 5-Star, an elite subset of partner programs with overall exemplary ratings, providing the incentives, resources and assistance that channel partners need to be successful in today's competitive marketplace.

The 5-Star criteria include partner incentives, margins and discounts, partner profitability, sales and marketing assistance, and subscription-based and consumption-based pricing availability. The criteria also include the availability of sales leads, deal registration, pre- and post-sales support, programs to help partners grow their services attach rates, training and education offerings, and specialization and technical certifications.

Snapshots of the 5-Star designees can be found in a series of slide shows, organized according to product/technology category, offering insights about each company's partner program and, where provided, their goals for 2022.

The slide shows include companies that provide products and services for systems and data centers, cloud platforms and infrastructure, networking and unified communications, software, security, devices and peripherals, and data storage and backup. While many companies provide products and services that span multiple technologies, we've assigned each vendor to the slide show for the technology category in which they are most prominent.

CRN also has slide shows of 5-Star partner programs from distributors and from emerging vendors (startup companies founded in 2016 or more recently).

Continued here:
Partner Programs Are The Glue That Bond IT Vendors And The Channel - CRN

Read More..

What Is Artificial Intelligence? – ExtremeTech

To many, AI is just a horrible Steven Spielberg movie. To others, it's the next generation of learning computers. But what is artificial intelligence, exactly? The answer depends on who you ask. Broadly, artificial intelligence (AI) is the combination of computer science and robust datasets, deployed to solve some kind of problem.

Many definitions of artificial intelligence include a comparison to the human mind or brain, whether in form or function. Alan Turing wrote in 1950 about thinking machines that could respond to a problem using human-like reasoning. His eponymous Turing test is still a benchmark for natural language processing. Later, Stuart Russell and Peter Norvig observed that humans are intelligent, but we're not always rational. Russell and Norvig saw two classes of artificial intelligence: systems that think and act like a human being, versus those that think and act rationally. Today, we've got all kinds of programs we call AI.

Many AIs employ neural nets, whose code is written to emulate some aspect of the architecture of neurons or the brain. However, not all intelligence is human-like. Nor is it necessarily the best idea to emulate neurobiological information processing. That's why engineers limit how far they carry the brain metaphor. It's more about how phenomenally parallel the brain is, and its distributed memory handling. As defined by John McCarthy in 2004, artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

Moreover, the distinction between a neural net and an AI is often a matter of philosophy, more than capabilities or design. Many AI-powered systems are neural nets under the hood. We also call some neural nets AIs. For example, OpenAI's powerful GPT-3 AI is a type of neural net called a transformer (more on these below). A robust neural net's performance can equal or outclass a narrow AI. There is much overlap between neural nets and artificial intelligence, but the capacity for machine learning can be the dividing line.

Conceptually: In the sense of its logical structure, to be an AI, you need three fundamental parts. First, there's the decision process: usually an equation, a model, or just some code. AIs often perform classification or apply transformations. To do that, the AI must be able to decide on patterns in the data. Second, there's an error function: some way for the AI to check its work. And third, if the AI is going to learn from experience, it needs some way to optimize its model. Many neural networks do this with a system of weighted nodes, where each node has both a value and a relationship to its network neighbors. Values change over time; stronger relationships have a higher weight in the error function.
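As a rough illustration of those three parts (and nothing more than that), here is a minimal Python sketch: a single weighted node as the decision process, mean squared error as the error function, and a simple gradient step as the optimization. The toy dataset and learning rate are made up for the example.

```python
import numpy as np

# 1. Decision process: one weighted node that squashes a weighted sum into (0, 1).
# 2. Error function: mean squared error between predictions and labels.
# 3. Optimization: nudge the weights in the direction that reduces the error.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # toy dataset: 100 points, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # label: 1 if the features sum above zero

weights = rng.normal(size=2)                # the node's weighted connections
bias = 0.0
lr = 0.1                                    # learning rate

def predict(X):
    # Decision process: weighted sum squashed by a sigmoid
    return 1.0 / (1.0 + np.exp(-(X @ weights + bias)))

for step in range(200):
    p = predict(X)
    error = np.mean((p - y) ** 2)           # error function: how wrong are we?
    # Optimization: gradient of the error with respect to each weight
    grad_w = (2 / len(X)) * X.T @ ((p - y) * p * (1 - p))
    grad_b = (2 / len(X)) * np.sum((p - y) * p * (1 - p))
    weights -= lr * grad_w
    bias -= lr * grad_b

print("final error:", np.mean((predict(X) - y) ** 2))
```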

Deep learning networks have more hidden layers than conventional neural networks. Circles are nodes, or neurons.

Physically: Typically, an AI is just software. AI-powered software services like Grammarly and Rytr use neural nets, like GPT-3. Those neural nets consist of equations or commands, written in things like Python or Common Lisp. They run comparisons, perform transformations, and suss out patterns from the data. They run on server-side hardware, usually, but which hardware isn't important. Any conventional silicon will do, be it CPU or GPU. However, there are dedicated hardware neural nets, a special kind of ASIC called neuromorphic chips.

Not all ASICs are neuromorphic designs. However, neuromorphic chips are all ASICs. Neuromorphic design is fundamentally different from CPUs, and only nominally overlaps with a GPU's multi-core architecture. But it's not some exotic new transistor type, nor any strange and eldritch kind of data structure. It's all about tensors. Tensors describe the relationships between things; they're a kind of mathematical object that can have metadata, just like a digital photo has EXIF data.

Modern Nvidia RTX GPUs have a huge number of tensor cores. That makes sense if you're drawing moving polygons, each with some number of properties or effects that apply to it. But tensors can handle more than just spatial data. The ability to parallelize tensor calculations is also why GPUs get scalped for crypto mining, and why they're used in cluster computing, especially for deep learning. GPUs excel at organizing many different threads at once.

But no matter how elegant your data organization might be, it still has to filter through multiple layers of software abstraction before it ever becomes binary. Intels neuromorphic chip, Loihi 2, affords a very different approach.

Loihi 2 is a neuromorphic chip that comes as a package deal with a software ecosystem named Lava. Loihi's physical architecture invites, almost requires, the use of weighting and an error function, both defining features of AI and neural nets. The chip's biomimetic design extends to its electrical signaling. Instead of ones and zeroes, on or off, Loihi fires in spikes with an integer value capable of carrying much more data. It begs to be used with tensors. What if you didn't have to translate your values into machine code and then binary? What if you could just encode them directly?

Machine learning models that use Lava can take full advantage of Loihi 2s unique physical design. Together, they offer a hybrid hardware-software neural net that can process relationships between multiple entire multi-dimensional datasets, like an acrobat spinning plates.

AI tools like Rytr, Grammarly and others do their work in a regular desktop browser. In contrast, neuromorphic chips like Loihi aren't designed for use in consumer systems. (At least, not yet.) They're intended for researchers. Instead, neuromorphic engineering has a different strength. It can allow silicon to perform another kind of biomimicry. Brains are extremely cheap, in terms of power use per unit throughput. The hope is that Loihi and other neuromorphic systems can mimic that power efficiency to break out of the Iron Triangle and deliver all three: good, fast, and cheap.

If the three-part logical structure of an AI sounds familiar, that's because neural nets have the same three logical pillars. In fact, from IBM's perspective, the relationship between machine learning, deep learning, neural networks and artificial intelligence is a hierarchy of evolution. It's just like the relationship between Charmander, Charmeleon and Charizard. They're all separate entities in their own right, but each is based on the one before, and they grow in power as they evolve. We still have Charmanders even though we also have Charizards.

Artificial intelligence as it relates to machine learning, neural networks, and deep learning. Image: IBM

When an AI learns, it's different from just saving a file after making edits. To an AI, learning involves changing its process.

Many neural nets learn through a process called back-propagation. Typically, a neural net is a feed-forward process, because data only moves in one direction through the network. It's efficient, but it's also a kind of ballistic (unguided) process. In back-propagation, however, later nodes in the process get to pass information back to earlier nodes. Not all neural nets perform back-propagation, but for those that do, the effect is like changing the coefficients in front of the variables in an equation.
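To make that back-and-forth concrete, here is a small, hypothetical sketch of back-propagation through a two-layer network: the forward pass feeds data one way, then the error signal at the output is passed back to the hidden layer so the earlier weights (the "coefficients") can change too. The layer sizes and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))                     # 64 samples, 3 inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

W1 = rng.normal(scale=0.5, size=(3, 4))          # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))          # hidden -> output weights
lr = 0.5

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    # Forward (feed-forward) pass: data moves in one direction
    h = sigmoid(X @ W1)                          # hidden activations
    out = sigmoid(h @ W2)                        # network output

    # Backward pass: error signal at the output...
    d_out = (out - y) * out * (1 - out)
    # ...propagated back to the hidden layer through W2
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Weight updates for both layers, later and earlier
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_h / len(X)

print("training error:", float(np.mean((out - y) ** 2)))
```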

We also divide neural nets into two classes, depending on what type of problems they can solve. In supervised learning, a neural net checks its work against a labeled training set or an overwatch; in most cases, that overwatch is a human. For example, SwiftKey learns how you text, and adjusts its autocorrect to match. Pandora uses listeners' input to finely classify music, in order to build specifically tailored playlists. 3blue1brown even has an excellent explainer series on neural nets, where he discusses a neural net using supervised learning to perform handwriting recognition.

Supervised learning is great for fine accuracy on an unchanging set of parameters, like alphabets. Unsupervised learning, however, can wrangle data with changing numbers of dimensions. (An equation with x, y and z terms is a three-dimensional equation.) Unsupervised learning tends to win with small datasets. It's also good at recognizing patterns we might not even know to look for.
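The contrast can be sketched with off-the-shelf scikit-learn models on a made-up two-blob dataset: a supervised classifier checks its work against the provided labels, while an unsupervised clustering model has to find the structure on its own.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Two blobs of points: one centred at (-2, -2), one at (2, 2)
X = np.vstack([rng.normal(-2, 1, size=(50, 2)), rng.normal(2, 1, size=(50, 2))])
labels = np.array([0] * 50 + [1] * 50)

# Supervised learning: the model is graded against the labels we supply
clf = LogisticRegression().fit(X, labels)
print("supervised accuracy:", clf.score(X, labels))

# Unsupervised learning: no labels at all, the model looks for clusters itself
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes found:", np.bincount(km.labels_))
```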

Transformers are a special, versatile kind of AI capable of unsupervised learning. They can integrate many different streams of data, each with its own changing parameters. Because of this, they're great at handling tensors. Tensors, in turn, are great for keeping all that data organized. With the combined powers of tensors and transformers, we can handle more complex datasets.

Video upscaling and motion smoothing are great applications for AI transformers. Likewise, tensors are crucial to the detection of deepfakes and alterations. With deepfake tools reproducing in the wild, it's a digital arms race.

The person in this image does not exist. This is a deepfake image created by StyleGAN, Nvidia's generative adversarial neural network.

Video signal has high dimensionality. It's made of a series of images, which are themselves composed of a series of coordinates and color values. Mathematically and in computer code, we represent those quantities as matrices or n-dimensional arrays. Helpfully, tensors are great for matrix and array wrangling. DaVinci Resolve, for example, uses tensor processing in its (Nvidia RTX) hardware-accelerated Neural Engine facial recognition utility. Hand those tensors to a transformer, and its powers of unsupervised learning do a great job picking out the curves of motion on-screen and in real life.
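As a quick illustration of that dimensionality (with made-up clip dimensions), a video clip is naturally an n-dimensional array: frames by height by width by color channels.

```python
import numpy as np

# A short, hypothetical clip as an n-dimensional array (a tensor):
# 30 frames, each 360x640 pixels, each pixel holding 3 color values.
video = np.zeros((30, 360, 640, 3), dtype=np.uint8)

print(video.shape)               # (frames, height, width, channels)
print(video.ndim)                # 4 dimensions of data to wrangle
print(video.nbytes / 1e6, "MB")  # ~20 MB of raw pixels for one second at 30 fps
```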

In fact, that ability to track multiple curves against one another is why the tensor-transformer dream team has taken so well to things like natural language processing. And the approach can generalize. Convolutional transformers, a hybrid of a CNN and a transformer, excel at image recognition on the fly. This tech is in use today, for things like robot search and rescue or assistive image and text recognition, as well as the much more controversial practice of dragnet facial recognition, a la Hong Kong.

The ability to handle a changing mass of data is great for consumer and assistive tech, but it's also clutch for things like mapping the genome and improving drug design. The list goes on. Transformers can also handle different kinds of dimensions, not just the spatial, which is useful for managing an array of devices or embedded sensors, like weather tracking, traffic routing, or industrial control systems. That's what makes AI so useful for data processing at the edge.

Not only does everyone have a cell phone, there are embedded systems in everything. This proliferation of devices gives rise to an ad hoc global network called the Internet of Things (IoT). In the parlance of embedded systems, the edge represents the outermost fringe of end nodes within the collective IoT network. Edge intelligence takes two main forms: AI on edge and AI for edge. The distinction is where the processing happens. AI on edge refers to network end nodes (everything from consumer devices to cars and industrial control systems) that employ AI to crunch data locally. AI for the edge enables edge intelligence by offloading some of the compute demand to the cloud.

In practice, the main differences between the two are latency and horsepower. Local processing is always going to be faster than a data pipeline beholden to ping times. The tradeoff is the computing power available server-side.

Embedded systems, consumer devices, industrial control systems, and other end nodes in the IoT all add up to a monumental volume of information that needs processing. Some phone home, some have to process data in near real-time, and some have to check and correct their own work on the fly. Operating in the wild, these physical systems act just like the nodes in a neural net. Their collective throughput is so complex that in a sense, the IoT has become the AIoT the artificial intelligence of things.

As devices get cheaper, even the tiny slips of silicon that run low-end embedded systems have surprising computing power. But having a computer in a thing doesn't necessarily make it smarter. Everything's got Wi-Fi or Bluetooth now. Some of it is really cool. Some of it is made of bees. If I forget to leave the door open on my front-loading washing machine, I can tell it to run a cleaning cycle from my phone. But the IoT is already a well-known security nightmare. Parasitic global botnets exist that live in consumer routers. Hardware failures can cascade, like the Great Northeast Blackout of summer 2003, or when Texas froze solid in 2021. We also live in a timeline where a faulty firmware update can brick your shoes.

There's a common pipeline (hypeline?) in tech innovation. When some Silicon Valley startup invents a widget, it goes from idea to hype train to widgets-as-a-service to disappointment, before finally figuring out what the widget's actually good for.

Oh, okay, there is an actual hypeline. Above: The 2018 Gartner hype cycle. Note how many forms of artificial intelligence showed up on this roller coaster then, and where they are now. Image: Gartner, 2018

This is why we lampoon the IoT with loving names like the Internet of Shitty Things and the Internet of Stings. (Internet of Stings devices communicate over TCBee-IP.) But the AIoT isn't something anyone can sell. It's more than the sum of its parts. The AIoT is a set of emergent properties that we have to manage if we're going to avoid an explosion of splinternets, and keep the world operating in real time.

In practice, artificial intelligence is often the same thing as a neural net capable of machine learning. They're both software that can run on whatever CPU or GPU is available and powerful enough. Neural nets often have the power to perform machine learning via back-propagation. There's also a kind of hybrid hardware-and-software neural net that brings a new meaning to machine learning. It's made using tensors, ASICs, and neuromorphic engineering by Intel. Furthermore, the emergent collective intelligence of the IoT has created a demand for AI on, and for, the edge. Hopefully we can do it justice.

Go here to see the original:
What Is Artificial Intelligence? - ExtremeTech

Read More..

Stanford center uses AI and machine learning to expand data on women’s and children’s health, director says – The Stanford Daily

Stanford's Center for Artificial Intelligence in Medicine and Imaging (AIMI) is increasing engagement around the use of artificial intelligence (AI) and machine learning to build a better understanding of data on women's and children's health, according to AIMI Director and radiology professor Curt Langlotz.

Langlotz explained that, while AIMI initially focused on applying AI to medical imaging, it has since expanded its focus to applications of AI for other types of data, such as electronic health records.

"Specifically, the center conducts interdisciplinary machine learning research that optimizes how data of all forms are used to promote health," Langlotz said during a Monday event hosted by the Maternal and Child Health Research Institute (MCHRI). "And that interdisciplinary flavor is in our DNA."

The center now has over 140 affiliated faculty across 20 departments, primarily housed in the engineering department and the school of medicine at Stanford, according to Langlotz.

AIMI has four main pillars: building an infrastructure for data science research, facilitating interdisciplinary collaborations, engaging the community and providing funding.

The center provides funding predominantly through a series of grant programs. Langlotz noted that the center awarded seven $75,000 grants in 2019 to fund mostly imaging projects, but it has since diversified funding to go toward projects investigating other forms of data, such as electronic health records. AIMI also collaborated with the Institute for Human-Centered Artificial Intelligence (HAI) in 2021 to give out six $200,000 grants, he added.

Outside of funding, AIMI hosts a virtual symposium on technology and health annually and has a health-policy committee that informs policymakers on the intersection between AI and healthcare. Furthermore, the center pairs industry partners with laboratories to work on larger research projects of mutual interest as part of the only industry affiliate program for the school of medicine, Langlotz added.

"Industry often has expertise that we don't, so they may have expertise on bringing products to markets as they may know what customers are looking for," Langlotz said. "And if we're building these kinds of algorithms, we really would like them to ultimately reach patients."

Heike Daldrup-Link, a professor of radiology and pediatrics, and Alison Callahan, a research scientist at the Center for Biomedical Informatics, shared their research funded by the AIMI Center that rests at the intersection of computer science and medicine.

Daldrup-Link's research involves analyzing children's responses to lymphoma cancer therapy with a model that examines tumor sites using positron emission tomography (PET) scans. These scans reveal the metabolic processes occurring within tissues and organs, according to Daldrup-Link. The scans also serve as a good source to build algorithms because there are at least 270,000 scans per year from lymphoma patients, resulting in a large amount of available data.

Callahan is building AI models to extract information from electronic health records to learn more about pregnancy and postnatal health outcomes. She explained that much of the health data available from records is currently unstructured, meaning it does not conform to a database or simple model. Still, AI methods "can really shine in extracting valuable information from unstructured content like clinical texts or notes," she said.

Callahan and Daldrup-Link are just two examples of researchers who use AI and machine learning methods to produce novel research on women's and children's health. Developing new methods such as these is important in solving complex problems related to the field of healthcare, according to Langlotz.

"If you're working on difficult and interesting applied problems that are clinically important, you're likely to encounter the need to develop new and interesting methods," Langlotz said. "And that's proven true for us."

Read the original post:
Stanford center uses AI and machine learning to expand data on women's and children's health, director says - The Stanford Daily

Read More..

Policy experts stress the need to regulate artificial intelligence in health care – Urology Times

Not having policies in place to regulate artificial intelligence (AI) and machine learning (ML) could have dire consequences across every sector of the health care industry.

That was the point made by Brian Scarpelli and Sebastian Holst during their presentation titled "A modest proposal for AI regulation in healthcare," held during the HIMSS22 Global Health Conference in Orlando. Scarpelli is the senior global policy counsel for the Connected Health Initiative and Holst is principal with Qi-fense, a consulting group that works in AI and ML.

"ML properties do more than challenge domain-specific applications of technology," Scarpelli and Holst write. "Many of these properties will force an evaluation and retooling of core manufacturing, quality, and risk frameworks that have effectively served as the foundation of today's industry-specific regulations and policies."

Here are some key points from their presentation on the growth of AI/ML and the need for regulation.

AI can potentially revolutionize health care in all facets. It can reduce administrative burdens for providers and payers and allow resources to be deployed within a health system to serve vulnerable patient populations. It can help manage public health emergencies such as the COVID-19 pandemic and improve both preventive care and diagnostic efficiency.

According to Scarpelli and Holst, the growth in machine learning products has surged since 2015, starting first with processing applications, including products for processing radiological images, and has since progressed into diagnosis applications, particularly in the radiological space to assist with triage and prioritization.

The number of patents coded to machine learning and health informatics has exploded, from 165 in 2017 to more than 1,100 in 2021.

While AI is promising, there are potential legal and ethical challenges that must be addressed. For example, one of the major themes of the HIMSS22 conference has been the challenge of achieving health equity and eliminating implicit bias. That's one of the major challenges of AI as well, since AI solutions can be biased. Many sessions focused on how diverse teams are needed when creating AI solutions to ensure that the programs don't carry the same biases as society, which could exacerbate current social problems, according to Tania M. Martin-Mercado, MS, MPH, a clinical researcher who presented on "How implicit bias affects AI in healthcare."

During her presentation, she pointed to an example of an online tool that estimates breast cancer risk, which calculates a lower risk for Black or Latinx women than for White women even when every other risk factor is identical.

A diverse group of health agencies, including the FDA, HHS, CMS, FTC, and the World Health Organization, are developing regulations and asking for guidance from various stakeholders, including AI developers, physicians and other providers, patients, medical societies, and academic institutions.

Scarpelli says that the vision for successful AI follows four principles. It should:

This article originally appeared on the website MedicalEconomics.com

Here is the original post:
Policy experts stress the need to regulate artificial intelligence in health care - Urology Times

Read More..

Artificial Intelligence: A game-changer for the Indian Education System – The Financial Express

With the rapid advancement of technology, Artificial Intelligence (AI) has become one of the key aspects of growth and innovation across industries. It is thus imperative that the youth is made familiar with the basic concepts of AI from their childhood. In fact, it looks like the process has already started. The Madhya Pradesh government recently announced the introduction of an Artificial Intelligence course for students from class 8. Chief Minister Shivraj Singh Chouhan said that this is going to be the first such initiative in the country.

India has always advocated for universal learning, and Artificial Intelligence constitutes an integral part of that. It is important for educators across states in India to start integrating the topic of AI into their classrooms as it can definitely help the education system achieve the impossible.

Let's dive into the various advantages of introducing Artificial Intelligence in the Indian education system:

According to a UNESCO report released in 2021, there are about 1.2 lakh single-teacher schools in the country, of which 89 percent are in rural areas. The report suggests that India needs around 11.16 lakh additional teachers to meet this shortfall. AI can help overcome this shortage and can provide easy access to education for one and all.

For professors and teachers to focus on every individual student's needs and requirements is difficult, and it is going to get tougher with the rapidly growing population. This problem can be resolved if our education system resorts to implementing AI programs in classrooms, which will not only help in assessing every student's learning graph but also help them navigate through their weaknesses.

Artificial Intelligence can help teachers with administrative work like creating feedback for students, grading papers, arranging parent-teacher interactions, etc. AI applications like text-to-speech can help teachers save time on a daily basis. This will not only save their time but also make room for the teachers to focus more on the creative aspect of teaching.

AI programs like chatbots can also do the job of assisting students in answering and resolving their queries any time, any place. They won't have to wait to see their teachers to get the answers; they can easily march ahead of time simply with a click of a button.

In today's day and age, it is important to optimize the process of learning for each and every child. There are a number of possibilities as to what AI could do if introduced as an integral part of the education system. It is up to us to make the most of it.

Follow this link:
Artificial Intelligence: A game-changer for the Indian Education System - The Financial Express

Read More..

UNSW researcher receives award recognising women in artificial intelligence – UNSW Newsroom

UNSW Engineering Professor Flora Salim has been honoured for her pioneering work in computing and machine learning by Women in AI, a global advocacy group for women in the artificial intelligence (AI) field.

The 2022 Women in AI Awards Australia and New Zealand recognised women across various industries committed to excellence in AI.

Finalists were judged on innovation, leadership and inspiring potential, global impact, and the ability of the AI solution to provide a social good for the community.

Prof. Salim was recognised for her AI achievements in the Defence and Intelligence award category.

The award acknowledged her research in the cross-cutting areas of ubiquitous computing and machine learning, with a focus on efficient, fair, and explainable machine learning for multi-dimensional sensor data, towards enabling situational and behaviour intelligence for multiple applications.

"I am thrilled and honoured to receive this award. This highlights our efforts into advancing AI and machine learning techniques for sensor data," Prof. Salim said.

"I would like to acknowledge my students, postdocs, collaborators, and mentors. I hope we can inspire more women to join us towards solving difficult AI problems that matter."

Prof. Salim is the inaugural Cisco Chair in Digital Transport in the School of Computer Science and Engineering at UNSW Sydney and a member of the Australian Research Council (ARC) College of Experts, having recently moved from RMIT University's School of Computing Technologies, Melbourne.

Her research on human-centred computing, AI and machine learning for behaviour modelling with multimodal spatial-temporal data has received funding from numerous partners, resulting in more than 150 papers and three patents.

Research led by Prof. Salim with collaborators from Microsoft Research and RMIT University on task characterisation and automating task scheduling led to insights that influenced the research and development of several new Microsoft product features.

UNSW Dean of Engineering, Professor Stephen Foster congratulated Prof. Salim on receiving an award that promotes women in the AI sector.

"Artificial intelligence will reshape every corner of our lives in the coming years, so it's pleasing to see brilliant women recognised for shaping the future of AI," Prof. Foster said.

"I congratulate Prof. Salim for being on the forefront of AI today."

Women in AI is a global not-for-profit network working towards empowering women and minorities to excel and be innovators in the AI and Data fields.

The awards were held at a gala dinner at the National Gallery of Victoria in Melbourne.

Visit link:
UNSW researcher receives award recognising women in artificial intelligence - UNSW Newsroom

Read More..

How Artificial Intelligence Affects the VPN – Daily Bayonet

Artificial Intelligence helps machines make intelligent decisions. The aim of Artificial Intelligence is to create smarter devices and systems. It helps us process information faster and makes our technology more user-friendly. Artificial Intelligence works by using algorithms to analyze data and find patterns. This allows us to predict outcomes and make better decisions.

To continue this article, we need to answer the following questions: What is a VPN, and how does it work? VPNs are secure and encrypted connections that allow you to connect to a remote network. VPNs can be used to access resources such as files, applications, and printers on the remote network. VPNs are also useful for connecting to services when traveling and protecting your privacy when using public Wi-Fi.

How does a VPN work? VPNs work by creating a secure connection between your computer and the VPN server. The VPN server is a computer that is located in a different location. All traffic between your computer and the VPN server is encrypted, so your personal information is protected. If you want to learn more about VPNs, you can visit VeePN's website.

People's lives have been made easier by Artificial Intelligence (AI). The complexity of AI technology has rendered previously complicated challenges obsolete. However, its greatest days are still to come, since most of its potential remains unexplored.

Despite the numerous advantages that Artificial Intelligence has provided, there are still drawbacks. At both the professional and personal levels, AI poses risks of security and privacy violations. The encryption technology of VPNs in the UAE and other countries might help address these security and privacy violations. On the other hand, the concerns around Artificial Intelligence are growing, and both individuals and corporations must seek a better approach.

Businesses are still dealing with the effects of the Covid-19 pandemic's slowdown. In the present climate, all eyes are on AI and VPNs. Both technologies have a lot of room for improvement. Both are expected to evolve.

What is next for these technologies when they evolve further? Continue reading to discover the solution.

Censorship has both positive and harmful aspects. It is based on the capabilities of Artificial Intelligence (AI), which can recognize content and gather data.

Some governments and organizations use it to limit or prohibit access to particular websites. Security and privacy violations, pornographic material, or possibly harmful software might all be factors.

Artificial Intelligence can potentially be abused. These behaviors demonstrate how AI technology can be abused to get an advantage, whether concealing facts or illegally gathering data. The Great Firewall of China is an example of this.

Despite the factors mentioned above, censorship exists today. According to present patterns, it is expected to continue in the future. As long as they are subjected to censorship, users will continue to look for ways to avoid geo-restrictions by using anonymous techniques.

Proxies and virtual private networks (VPNs) are effective choices. Some individuals confuse people by using virtual IP addresses, but knowledgeable users know the differences.

Current needs will drive future developments in Artificial Intelligence (AI) technology and Virtual Private Networks (VPNs). Because of the significant expenditures made, these technologies will encourage all stakeholders, including corporations, to seek inclusive solutions.

Next-generation VPNs will be the future of VPN technology. These VPNs will mix virtual private network technologies and design. Thanks to cloud technologies, advanced zero trust architecture will be enabled.

New and better capabilities will be accessible in future versions that were not present in previous VPN versions. These features might be entirely new or improvements to existing features, as indicated below.

IP Tunneling: In the past, encryption only covered the data within an IP packet, and it still does today. In the future, a VPN's encryption technology will encompass all IP packets.

Faster configuration: Future VPN versions will be simpler and faster to set up than current ones.

Fingerprinting: With this next-generation VPN functionality, customers will be able to identify their activity and data across virtual private networks.

Traffic concealment: VPNs are now used to hide communications that can be easily discovered. As a result, certain VPNs have been blacklisted. Audio and video streaming services are well-known to be restricted. The next-generation VPNs will be difficult to detect by trackers.

These and many more factors will contribute to the future VPN market's complexity with next-generation VPNs. If you are enthusiastic about the changes described above, you can expect to like them.

Virtual private networks (VPNs) are also benefiting from AI. VPNs for all devices can use machine learning to combat AI-driven privacy violations. According to Smart Data Collective, machine learning algorithms allow VPNs to be more successful in protecting customers from online risks.

Machine learning and virtual private networks work together to take business data security to the next level. They're even employed in digital marketing and eCommerce to ensure site security. According to Analytics Insight, machine learning is responsible for helping VPNs attain 90 percent accuracy, according to research published in the Journal of Cyber Security Technology.

AI-based routing is one of the advancements brought about by Artificial Intelligence. AI-based routing allows internet users to be routed to a VPN server based on their location, connecting to the VPN server closest to their destination.

A user connecting to a website hosted in Japan will most likely receive a Tokyo location as his exit server. Such progress, made possible by Artificial Intelligence, improves ping response while ensuring that VPN traffic stays within the network, making tracking the user considerably more difficult.
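A stripped-down sketch of that idea is below: given a destination, pick the exit server nearest to it. The server names and coordinates are hypothetical, and a real provider's AI-based routing would weigh load, latency, and learned traffic patterns rather than raw distance alone.

```python
import math

# Hypothetical exit-server list; not any real provider's infrastructure.
servers = {
    "tokyo":     (35.68, 139.69),
    "frankfurt": (50.11, 8.68),
    "new_york":  (40.71, -74.01),
}

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points in kilometres
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    d = math.sin((lat2 - lat1) / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(d))

def nearest_exit(destination_latlon):
    # Pick the exit server closest to where the traffic is headed
    return min(servers, key=lambda s: haversine_km(servers[s], destination_latlon))

# A user browsing a site hosted in Japan gets the Tokyo exit server
print(nearest_exit((35.0, 135.0)))   # -> "tokyo"
```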

VPN also makes home connections more secure, which is important given the rise in online crimes. According to Smart Data Collective, VPN suites leverage artificial intelligence capabilities to offer a whole new degree of security.

AI provides several advantages across the board. It evaluates a large quantity of data, performs several tasks with its algorithms, and connects to the Internet. This is where a virtual private network (VPN) comes into play. Even when used with AI, it helps maintain security and guard against privacy violations.

Integrating the functionalities mentioned above in next-generation VPNs will determine the VPN market in the future. Given the contemporary era's rapid technological evolution, you should expect to see these functions sooner rather than later.

See the original post:
How Artificial Intelligence Affects the VPN - Daily Bayonet

Read More..

The JD Technology Research Scholarship in Artificial intelligence – University of Sydney

1. Background

a. This Scholarship has been established to provide financial assistance to PhD students who are undertaking fundamental research in Artificial Intelligence.

b. This Scholarship is funded by a collaboration agreement between Jingdong Technology Co Ltd and the University of Sydney.

2. Eligibility

a. The Scholarship is offered subject to the applicant having an unconditional offer of admission to study full-time in a PhD within the Faculty of Engineering at the University of Sydney.

b. Applicants must be willing to conduct fundamental research into Artificial Intelligence and work on an agreed research topic supervised by academic staff in the School of Computer Science at the University of Sydney and research scientists at Jingdong Technology Co Ltd.

c. Applicants must also hold an Honours degree (First Class or Second Class Upper) or equivalent in a relevant discipline.

d. It is a condition of accepting this scholarship that awardees withdraw applications for Research Training Program (RTP) funding, including any RTP Allowance, RTP Fees Offset (international only), RTP Stipend or RTP Scholarships.

e. International students currently outside Australia must have applied for a student visa before they commence their studies.

f. The applicants must be willing to be affiliated with The UBTECH Sydney AI Centre.

3. Selection Criteria

a. The successful applicant will be awarded the Scholarship on the basis of:

I. academic merit,
II. curriculum vitae, and
III. area of study and/or research proposal.

b. The successful applicant will be awarded the Scholarship on the nomination of the relevant research supervisor(s), or their nominated delegate(s).

4. Value

a. The Scholarship will provide a stipend allowance equivalent to the University of Sydney's Research Training Program (RTP) Stipend rate (indexed on 1 January each year) for up to 3 years, subject to satisfactory academic performance.

b. The recipient may apply for an extension of the stipend allowance for up to 6 months.

c. The Scholarship will provide $2500 per annum for the same duration as the stipend for conference registration fees and conference travel costs to highly ranked conferences. This will be reimbursed to the student upon the provision of receipts through the lead supervisor.

d. Academic course fees and the Student Services Amenities fee (SSAF) are also provided for a successful international applicant, for 12 research periods, subject to satisfactory academic performance.

e. The recipient may apply for an extension of the academic course fees and SSAF for up to 2 research periods.

f. Periods of study already undertaken towards the degree prior to the commencement of the Scholarship will be deducted from the maximum duration of the Scholarship excluding any potential extension period.

g. The Scholarship is for commencement in the relevant research period in which it is offered and cannot be deferred or transferred to another area of research without prior approval.

h. No other amount is payable.

i. The Scholarship will be offered subject to the availability of funding.

5. Eligibility for Progression

a. The Scholarship is maintained by attending and passing the annual progress evaluation, completing 12 credit points of HDR coursework units by the end of year 2 and remaining enrolled in their PhD.

b. The student will be required to participate in monthly research progress meetings with the principal supervisor from the faculty and the co-supervisor from Jingdong Technology Co Ltd.

6. Leave Arrangements

a. The Scholarship recipient receives up to 20 working days recreation leave each year of the Scholarship and this may be accrued. However, the student will forfeit any unused leave remaining when the Scholarship is terminated or complete. Recreation leave does not attract a leave loading and the supervisor's agreement must be obtained before leave is taken.

b. The Scholarship recipient may take up to 10 working days sick leave each year of the Scholarship and this may be accrued over the tenure of the Scholarship. Students with family responsibilities, caring for sick children or relatives, or experiencing domestic violence, may convert up to five days of their annual sick leave entitlement to carer's leave on presentation of medical certificate(s). Students taking sick leave must inform their supervisor as soon as practicable.

7. Research Overseas

a. Scholarship recipients commencing in Australia may not normally conduct research overseas within the first six months of award.

b. Scholarship recipients commencing in Australia may conduct up to 12 months of their research outside Australia. Approval must be sought from the student's lead supervisor, Head of School and the Higher Degree by Research Administration Centre (HDRAC), and will only be granted if the research is essential for completion of the degree. All periods of overseas research are cumulative and will be counted towards a student's candidature. Students must remain enrolled full-time at the University and receive approval to count time away.

c. Scholarship recipients are normally expected to commence their degree in Australia. However, Scholarship recipients who are not able to travel to Australia to start their degree, may commence their studies overseas only if they have applied for their student visa and with the approval of their lead supervisor and Associate Dean (Research Education).

8. Suspension

a. The Scholarship recipient cannot suspend their award within their first six months of study, unless a legislative provision applies.

b. The Scholarship recipient may apply for up to 12 months suspension of the Scholarship for any reason during the tenure of the Scholarship. Periods of Scholarship suspension are cumulative and failure to resume study after suspension will result in the award being terminated. Approval must be sought from the student's supervisor, Jingdong Technology Co Ltd and the Faculty via application to the Higher Degree by Research Administration Centre (HDRAC). Periods of study towards the degree during suspension of the Scholarship will be deducted from the maximum tenure of the Scholarship.

9. Changes in Enrolment

a. The Scholarship recipient must notify HDRAC, Jingdong Technology Co Ltd and their supervisor promptly of any planned changes to their enrolment including but not limited to: attendance pattern, suspension, leave of absence, withdrawal, course transfer, and candidature upgrade or downgrade. If the award holder does not provide notice of the changes identified above, the University may require repayment of any overpaid stipend and tuition fees.

10. Termination

a. The Scholarship will be terminated:

I. on resignation or withdrawal of the recipient from their research degree,
II. upon submission of the thesis or at the end of the award,
III. if the recipient ceases to be a full-time student and prior approval has not been obtained to hold the Scholarship on a part-time basis,
IV. upon the recipient having completed the maximum candidature for their degree as per the University of Sydney (Higher Degree by Research) Rule 2011 Policy,
V. if the recipient receives an alternative primary stipend and tuition fees scholarship. In such circumstances this Scholarship will be terminated in favour of the alternative stipend and tuition fees scholarship where it is of higher value,
VI. if the recipient does not resume study at the end of a period of approved leave,
VII. if the recipient ceases to meet the eligibility requirements specified for this Scholarship (other than during a period in which the Scholarship has been suspended or during a period of approved leave), or
VIII. if the recipient commences their degree from overseas without having applied for their student visa.

b. The Scholarship may also be terminated by the University before this time if, in the opinion of the University:

I. the course of study is not being carried out with competence and diligence or in accordance with the terms of this offer,
II. the student fails to maintain satisfactory progress,
III. the student fails to attend and pass their annual progress evaluation and complete 12 credit points of HDR coursework units by the end of year 2, or
IV. the student has committed misconduct or other inappropriate conduct.

c. The Scholarship will be suspended throughout the duration of any enquiry/appeal process.

d. Once the Scholarship has been terminated, it will not be reinstated unless due to University error.

11. Misconduct

a. Where during the Scholarship a student engages in misconduct, or other inappropriate conduct (either during the Scholarship or in connection with the student's application and eligibility for the Scholarship), which in the opinion of the University warrants recovery of funds provided, the University may require the student to repay payments made in connection with the Scholarship. Examples of such conduct include, without limitation: academic dishonesty, research misconduct within the meaning of the Research Code of Conduct (for example, plagiarism in proposing, carrying out or reporting the results of research, or failure to declare or manage a serious conflict of interests), breach of the Code of Conduct for Students and misrepresentation in the application materials or other documentation associated with the Scholarship.

b. The University may require such repayment at any time during or after the Scholarship period. In addition, by accepting this Scholarship, the student consents to all aspects of any investigation into misconduct in connection with this Scholarship being disclosed by the University to the funding body and/or any relevant professional body.

12. Intellectual Property

The successful recipient of this Scholarship must complete the Student Deed Poll supplied by the University of Sydney.

13. Acknowledgement

a. The successful applicant must provide Jingdong Technology Co Ltd and the University of Sydney with a copy of any proposed publications at least 45 days in advance of submitting them for publication. Comments and/or reasonable amendments to the publication can be made by Jingdong Technology Co Ltd and the University of Sydney to protect their Confidential Information and/or Intellectual Property, provided they are given to the publishing Party in writing no later than 45 days before the publication is made.

14. Privacy and Confidentiality

a. The successful applicant is required to keep all confidential information disclosed by Jingdong Technology Co Ltd or the University of Sydney confidential and ensure it is not disclosed to a third party without the prior written consent of the University of Sydney or Jingdong Technology Co Ltd, as appropriate, or as required by law.

b. All information or data provided to the successful recipient by Jingdong Technology Co Ltd must remain confidential unless used for the purposes of the research outlined in these terms and conditions.

c. The successful recipient must gain written consent from Jingdong Technology Co Ltd prior to use of any confidential information in any thesis or other publication authored by the recipient.

15. Other requirements

a. The successful recipient agrees to provide a copy of their thesis to Jingdong Technology Co Ltd in advance of submitting it for publication.

Go here to see the original:
The JD Technology Research Scholarship in Artificial intelligence - University of Sydney

Read More..

As Russia Plots Its Next Move, an AI Listens to the Chatter – WIRED

A radio transmission between several Russian soldiers in Ukraine in early March, captured from an unencrypted channel, reveals panicked and confused comrades retreating after coming under artillery fire.

"Vostok, I am Sneg 02. On the highway we have to turn left, fuck," one of the soldiers says in Russian, using code names meaning "East" and "Snow 02."

"Got it. No need to move further. Switch to defense. Over," another responds.

Later, a third soldier tries to make contact with another codenamed South 95: "Yug 95, do you have contact with a senior? Warn him on the highway artillery fire. On the highway artillery fire. Don't go by column. Move carefully."

The third Russian soldier continues, becoming increasingly agitated: "Get on the radio. Tell me your situation and the artillery location, approximately what weapon they are firing." Later, the third soldier speaks again: "Name your square. Yug 95, answer my questions. Name the name of your square!"

As the soldiers spoke, an AI was listening. Their words were automatically captured, transcribed, translated, and analyzed using several artificial intelligence algorithms developed by Primer, a US company that provides AI services for intelligence analysts. While it isn't clear whether Ukrainian troops also intercepted the communication, the use of AI systems to surveil Russia's army at scale shows the growing importance of sophisticated open source intelligence in military conflicts.

A number of unsecured Russian transmissions have been posted online, translated, and analyzed on social media. Other sources of data, including smartphone video clips and social media posts, have similarly been scrutinized. But it's the use of natural language processing technology to analyze Russian military communications that is especially novel. For the Ukrainian army, making sense of intercepted communications still typically involves human analysts working away in a room somewhere, translating messages and interpreting commands.

The tool developed by Primer also shows how valuable machine learning could become for parsing intelligence information. The past decade has seen significant advances in AI's capabilities around image recognition, speech transcription, translation, and language processing, thanks to large neural network algorithms that learn from vast tranches of training data. Off-the-shelf code and APIs that use AI can now transcribe speech, identify faces, and perform other tasks, often with high accuracy. In the face of Russia's numerical and artillery advantages, intercepting communications may well be making a difference for Ukrainian troops on the ground.

Primer already sells AI algorithms trained to transcribe and translate phone calls, as well as ones that can pull out key terms or phrases. Sean Gourley, Primer's CEO, says the company's engineers modified these tools to carry out four new tasks: to gather audio captured from web feeds that broadcast communications captured using software that emulates radio receiver hardware; to remove noise, including background chatter and music; to transcribe and translate Russian speech; and to highlight key statements relevant to the battlefield situation. In some cases this involved retraining machine learning models to recognize colloquial terms for military vehicles or weapons.
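A hypothetical sketch of how those four stages might be chained is below. The function names and signatures are placeholders invented for illustration; they do not reflect Primer's actual code or APIs.

```python
# Placeholder pipeline for the four tasks the article describes:
# capture, denoise, transcribe/translate, and extract key statements.

def capture_audio(stream_url: str) -> bytes:
    """Pull audio from a web feed rebroadcasting software-defined-radio output."""
    raise NotImplementedError("placeholder: fetch and buffer the stream")

def denoise(audio: bytes) -> bytes:
    """Strip background chatter, static, and music from the capture."""
    raise NotImplementedError("placeholder: run a noise-suppression model")

def transcribe_and_translate(audio: bytes) -> str:
    """Transcribe Russian speech and translate it to English."""
    raise NotImplementedError("placeholder: speech recognition + translation models")

def extract_key_statements(text: str) -> list[str]:
    """Highlight statements relevant to the battlefield situation."""
    raise NotImplementedError("placeholder: keyphrase / relevance model")

def process_intercept(stream_url: str) -> list[str]:
    # Chain the four stages end to end for a single intercepted feed
    audio = denoise(capture_audio(stream_url))
    return extract_key_statements(transcribe_and_translate(audio))
```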

The ability to train and retrain AI models on the fly will become a critical advantage in future wars, says Gourley. He says the company made the tool available to outside parties but refuses to say who. "We won't say who's using it or for what they're using it for," Gourley says. Several other American companies have made technologies, information, and expertise available to Ukraine as it fights against Russian invaders.

The fact that some Russian troops are using unsecured radio channels has surprised military analysts. It seems to point to an under-resourced and under-prepared operation, says Peter W. Singer, a senior fellow at the think tank New America who specializes in modern warfare. "Russia used intercepts of open communications to target its foes in past conflicts like Chechnya, so they, of all forces, should have known the risks," Singer says. He adds that these signals could undoubtedly have helped the Ukrainians, although analysis was most likely done manually. "It is indicative of comms equipment failures, some arrogance, and possibly, the level of desperation at the higher levels of the Russian military," adds Mick Ryan, a retired Australian general and author.

Read the original post:
As Russia Plots Its Next Move, an AI Listens to the Chatter - WIRED

Read More..