
Kumar awarded grant from the National Science Foundation – University of Alabama at Birmingham

A nearly $600,000 grant from the National Science Foundation has been awarded to a UAB Department of Computer Science assistant professor.

The University of Alabama at Birmingham's Sidharth Kumar, Ph.D., assistant professor in the College of Arts and Sciences Department of Computer Science, has received a grant from the National Science Foundation.

The $599,852 grant will fund a collaborative research project between Kumar and Utah State University titled "Collaborative Research: SHF: Small: Scalable and Extensible I/O Runtime and Tools for Next Generation Adaptive Data Layouts."

Kumar, who is principal investigator of the project, says the funding will support two doctoral students who will conduct cutting-edge research in the field of high-performance computing, focusing on large-scale data management and scientific visualization.

He hopes that their research will develop new algorithms and techniques to alleviate the data movement burden on large-scale supercomputers, with the goal of facilitating scalable parallel input/output as well as progressive in situ and post hoc data analysis.

"We aim to develop production-ready solutions that can be deployed on supercomputers and can be used by users across domains," Kumar said.

Excerpt from:

Kumar awarded grant from the National Science Foundation - University of Alabama at Birmingham


3 research universities to collaborate with industry, government to develop quantum technologies: News at IU: Indiana University – IU Newsroom

BLOOMINGTON, Ind. -- Quantum science and engineering can save energy, speed up computation, enhance national security and defense, and innovate health care. With a grant from the National Science Foundation, researchers from Indiana University (both Bloomington and IUPUI campuses), Purdue University and the University of Notre Dame will develop industry- and government-relevant quantum technologies as part of the Center for Quantum Technologies. Purdue will serve as the lead site.

"The Center for Quantum Technologies is based on the collaboration between world experts whose collective mission is to deliver frontier research addressing the quantum technological challenges facing industry and government agencies," said Gerardo Ortiz, Indiana University site director, scientific director of the IU Quantum Science and Engineering Center and professor of physics. "It represents a unique opportunity for the state of Indiana to become a national and international leader in technologies that can shape our future."

"This newly formed center is unique in many aspects," said Ricardo Decca, professor and chair of the Department of Physics at IUPUI. "It brings together experts in many scientific disciplines -- computer science, physics, chemistry, materials science -- from three universities and four campuses and companies developing the next generation of quantum-based information and sensing systems. The future seems very bright."

Given the wide applicability of quantum technologies, the new Center for Quantum Technologies will team with member organizations from a variety of industries, including computing, defense, chemical, pharmaceutical, manufacturing and materials. The center's researchers will develop foundational knowledge into industry-friendly quantum devices, systems and algorithms with enhanced functionality and performance.

"Over the coming decades, quantum science will revolutionize technologies ranging from the design of drugs, materials and energy harvesting systems, to computing, data security, and supply chain logistics," IU Vice President for Research Fred Cate said. "Through the CQT, Indiana will be at the forefront of transferring new quantum algorithms and technologies to industry. We are also looking forward to educating the quantum workforce for the future through the corporate partnerships that are integral to the funding model of the CQT."

Committed industry and government partners include Accenture, the Air Force Research Laboratory, BASF, Cummins, D-Wave, Eli Lilly, Entanglement Inc., General Atomics, Hewlett Packard Enterprise, IBM Quantum, Intel, Northrop Grumman, NSWC Crane, Quantum Computing Inc., Qrypt and Skywater Technology.

Additionally, the Center for Quantum Technologies will train future quantum scientists and engineers to fill the need for a robust quantum workforce. Students engaged with the center will take on many of the responsibilities of principal investigators, including drafting proposals, presenting research updates to members, and planning meetings and workshops.

The center is funded for an initial five years through the NSF's Industry-University Cooperative Research Centers program, which generates breakthrough research by enabling close and sustained engagement between industry innovators, world-class academic teams and government agencies. The IUCRC program is unique in that members fund and guide the direction of research through active involvement and mentoring.

Other academic collaborators include Sabre Kais, center director and distinguished professor of chemical physics at Purdue; Peter Kogge, the University of Notre Dame site director and the Ted H. McCourtney Professor of Computer Science and Engineering; and David Stewart, Center for Quantum Technologies industry liaison officer and managing director of the Purdue Quantum Science and Engineering Institute.

Here is the original post:

3 research universities to collaborate with industry, government to develop quantum technologies: News at IU: Indiana University - IU Newsroom


AI that can learn the patterns of human language – MIT News

Human languages are notoriously complex, and linguists have long thought it would be impossible to teach a machine how to analyze speech sounds and word structures in the way human investigators do.

But researchers at MIT, Cornell University, and McGill University have taken a step in this direction. They have demonstrated an artificial intelligence system that can learn the rules and patterns of human languages on its own.

When given words and examples of how those words change to express different grammatical functions (like tense, case, or gender) in one language, this machine-learning model comes up with rules that explain why the forms of those words change. For instance, it might learn that the letter "a" must be added to the end of a word to make the masculine form feminine in Serbo-Croatian.
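To make the idea concrete, here is a minimal, hypothetical sketch of the kind of symbolic rule the model searches for, using the Serbo-Croatian example above; the rule representation and word pairs are illustrative assumptions, not the researchers' actual code.

```python
# Hypothetical illustration of a learned morphological rule: the rule itself is
# just "append the suffix 'a'", and it is judged by how many observed
# (masculine, feminine) pairs it explains.

def apply_rule(stem: str, suffix: str = "a") -> str:
    """Toy rule: derive the feminine form by appending a suffix to the stem."""
    return stem + suffix

# Illustrative Serbo-Croatian adjective pairs (masculine, feminine).
pairs = [("nov", "nova"), ("star", "stara"), ("mlad", "mlada")]

# The rule "explains" the data if it reproduces every observed feminine form.
print(all(apply_rule(m) == f for m, f in pairs))  # True
```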

This model can also automatically learn higher-level language patterns that can apply to many languages, enabling it to achieve better results.

The researchers trained and tested the model using problems from linguistics textbooks that featured 58 different languages. Each problem had a set of words and corresponding word-form changes. The model was able to come up with a correct set of rules to describe those word-form changes for 60 percent of the problems.

This system could be used to study language hypotheses and investigate subtle similarities in the way diverse languages transform words. It is especially notable because the system discovers models that can be readily understood by humans, and it acquires these models from small amounts of data, such as a few dozen words. And instead of using one massive dataset for a single task, the system utilizes many small datasets, which is closer to how scientists propose hypotheses: they look at multiple related datasets and come up with models to explain phenomena across those datasets.

"One of the motivations of this work was our desire to study systems that learn models of datasets that is represented in a way that humans can understand. Instead of learning weights, can the model learn expressions or rules? And we wanted to see if we could build this system so it would learn on a whole battery of interrelated datasets, to make the system learn a little bit about how to better model each one," says Kevin Ellis '14, PhD '20, an assistant professor of computer science at Cornell University and lead author of the paper.

Joining Ellis on the paper are MIT faculty members Adam Albright, a professor of linguistics; Armando Solar-Lezama, a professor and associate director of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences and a member of CSAIL; as well as senior author Timothy J. O'Donnell, assistant professor in the Department of Linguistics at McGill University and Canada CIFAR AI Chair at the Mila - Quebec Artificial Intelligence Institute.

The research is published today in Nature Communications.

Looking at language

In their quest to develop an AI system that could automatically learn a model from multiple related datasets, the researchers chose to explore the interaction of phonology (the study of sound patterns) and morphology (the study of word structure).

Data from linguistics textbooks offered an ideal testbed because many languages share core features, and textbook problems showcase specific linguistic phenomena. Textbook problems can also be solved by college students in a fairly straightforward way, but those students typically have prior knowledge about phonology from past lessons that they use to reason about new problems.

Ellis, who earned his PhD at MIT and was jointly advised by Tenenbaum and Solar-Lezama, first learned about morphology and phonology in an MIT class co-taught by O'Donnell, who was a postdoc at the time, and Albright.

"Linguists have thought that in order to really understand the rules of a human language, to empathize with what it is that makes the system tick, you have to be human. We wanted to see if we can emulate the kinds of knowledge and reasoning that humans (linguists) bring to the task," says Albright.

To build a model that could learn a set of rules for assembling words, which is called a grammar, the researchers used a machine-learning technique known as Bayesian Program Learning. With this technique, the model solves a problem by writing a computer program.

In this case, the program is the grammar the model thinks is the most likely explanation of the words and meanings in a linguistics problem. They built the model using Sketch, a popular program synthesizer which was developed at MIT by Solar-Lezama.

But Sketch can take a lot of time to reason about the most likely program. To get around this, the researchers had the model work one piece at a time, writing a small program to explain some data, then writing a larger program that modifies that small program to cover more data, and so on.
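That incremental strategy can be sketched in a few lines. The following toy example is an assumption-laden simplification: the "program" is just a single suffix rule and the search is brute-force enumeration, standing in for the far richer grammars and the Sketch-based solver the researchers actually use.

```python
# A toy stand-in for the incremental synthesis loop described above: explain a
# small batch of word pairs first, then revise the current "program" (here, a
# single suffix rule) as more data arrives. This is not the authors' system.

def candidate_rules(pairs):
    # Enumerate toy rules of the form "append suffix s" consistent with the data.
    return {fem[len(masc):] for masc, fem in pairs if fem.startswith(masc)}

def score(rule, pairs):
    # A rule is better the more observed pairs it reproduces.
    return sum(1 for masc, fem in pairs if masc + rule == fem)

def incremental_synthesis(pairs, batch_size=2):
    covered, best = [], ""
    for i in range(0, len(pairs), batch_size):
        covered += pairs[i:i + batch_size]          # take on a little more data
        candidates = candidate_rules(covered) | {best}
        best = max(candidates, key=lambda r: score(r, covered))
    return best

print(incremental_synthesis([("nov", "nova"), ("star", "stara"), ("mlad", "mlada")]))  # 'a'
```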

They also designed the model so it learns what good programs tend to look like. For instance, it might learn some general rules on simple Russian problems that it would apply to a more complex problem in Polish because the languages are similar. This makes it easier for the model to solve the Polish problem.

Tackling textbook problems

When they tested the model using 70 textbook problems, it was able to find a grammar that matched the entire set of words in the problem in 60 percent of cases, and correctly matched most of the word-form changes in 79 percent of problems.

The researchers also tried pre-programming the model with some knowledge it should have learned if it was taking a linguistics course, and showed that it could solve all problems better.

"One challenge of this work was figuring out whether what the model was doing was reasonable. This isn't a situation where there is one number that is the single right answer. There is a range of possible solutions which you might accept as right, close to right, etc.," Albright says.

The model often came up with unexpected solutions. In one instance, it discovered the expected answer to a Polish language problem, but also another correct answer that exploited a mistake in the textbook. This shows that the model could debug linguistics analyses, Ellis says.

The researchers also conducted tests that showed the model was able to learn some general templates of phonological rules that could be applied across all problems.

"One of the things that was most surprising is that we could learn across languages, but it didn't seem to make a huge difference," says Ellis. "That suggests two things. Maybe we need better methods for learning across problems. And maybe, if we can't come up with those methods, this work can help us probe different ideas we have about what knowledge to share across problems."

In the future, the researchers want to use their model to find unexpected solutions to problems in other domains. They could also apply the technique to more situations where higher-level knowledge can be applied across interrelated datasets. For instance, perhaps they could develop a system to infer differential equations from datasets on the motion of different objects, says Ellis.

"This work shows that we have some methods which can, to some extent, learn inductive biases. But I don't think we've quite figured out, even for these textbook problems, the inductive bias that lets a linguist accept the plausible grammars and reject the ridiculous ones," he adds.

"This work opens up many exciting venues for future research. I am particularly intrigued by the possibility that the approach explored by Ellis and colleagues (Bayesian Program Learning, BPL) might speak to how infants acquire language," says T. Florian Jaeger, a professor of brain and cognitive sciences and computer science at the University of Rochester, who was not an author of this paper. "Future work might ask, for example, under what additional induction biases (assumptions about universal grammar) the BPL approach can successfully achieve human-like learning behavior on the type of data infants observe during language acquisition. I think it would be fascinating to see whether inductive biases that are even more abstract than those considered by Ellis and his team, such as biases originating in the limits of human information processing (e.g., memory constraints on dependency length or capacity limits in the amount of information that can be processed per time), would be sufficient to induce some of the patterns observed in human languages."

This work was funded, in part, by the Air Force Office of Scientific Research, the Center for Brains, Minds, and Machines, the MIT-IBM Watson AI Lab, the Natural Science and Engineering Research Council of Canada, the Fonds de Recherche du Québec - Société et Culture, the Canada CIFAR AI Chairs Program, the National Science Foundation (NSF), and an NSF graduate fellowship.

View original post here:

AI that can learn the patterns of human language - MIT News


Doctoral Programme Computer Science job with UNIVERSITY OF HELSINKI | 306834 – Times Higher Education

The University of Helsinki doctoral programmes invite applications for doctoral researchers starting from 1 January 2023 for a 1-4 year period. The university's doctoral researchers, supervised by top-class researchers, carry out their research as part of an international academic community.

Position description

The duties of a doctoral candidate are to work on their doctoral thesis and to complete the doctoral studies determined by the curriculum of each doctoral programme. The duties may also include teaching and other tasks for up to 5% of the annual working time. The duration of the employment contract depends on the phase of the appointee's thesis and starts with a probationary period of six months.

The University of Helsinki offers:

The salary for doctoral candidates at the beginning of their dissertation work is usually €2,000-2,600 per month. As work on the dissertation progresses, the demands level of the salary rises. Salaries for doctoral candidate positions are based on levels 2-4 of the job requirement scheme for teaching and research personnel in the salary system of Finnish universities. In addition, the doctoral candidate will be paid a salary component based on personal work performance.

The University has four doctoral schools, which offer 33 doctoral programmes. The University awards some 500 doctoral degrees annually.

Qualifications

Applications are evaluated based on the quality of the research plan, available supervision arrangements, and the ability and motivation, as demonstrated through previous studies, academic performance, or other previously acquired knowledge and experience, to pursue the doctoral degree according to the research plan and the accompanying study plan. The research plan must fit the research profile of the doctoral programme.

The appointee must obtain the right to pursue a doctoral degree at the University of Helsinki during the probationary period. It is possible to start the employment relationship only after the degree (higher university degree or equivalent), which makes you eligible for doctoral studies, has been completed and obtained.

Doctoral candidates selected for the positions are employed by, and carry out their research in, the University's academic units. The unit of employment may be a Faculty, a department within a Faculty, or an Independent Institute, and is usually the academic unit of the primary supervisor.

Interested in the position?

The University of Helsinki is committed to promoting equality and preventing discrimination in all its operations. We encourage and welcome applications from people of all backgrounds.

Applicants will be informed of decisions in December 2022.

Acquaint yourself with the application instructions via this link: https://www.helsinki.fi/en/research/doctoral-education/university-funded-doctoral-researcher-positions

Join us in making the world a better place!

#helsinkiunicareers

See more here:

Doctoral Programme Computer Science job with UNIVERSITY OF HELSINKI | 306834 - Times Higher Education


Election Officials Have Been Largely Successful in Deterring Cyber Threats, CISA Official Says – Nextgov

Increased coordination between federal agencies, election officials, and private sector election vendors has helped deter an influx of cyber threats directed at U.S. voting systems, an election official from the Cybersecurity and Infrastructure Security Agency said on Thursday during an event hosted by the Election Assistance Commission and Pepperdine University.

Mona Harrington, the acting assistant director of CISA's National Risk Management Center, which includes the agency's election security team, said that since election systems were designated as critical infrastructure in 2017, the attacks have become much more sophisticated and the volume of attacks has certainly increased. But with the partnerships that CISA and election officials have built, along with the products and services currently being used to mitigate potential risks, election officials have many of the tools needed to deter both nation-state actors and non-nation-state adversaries.

Harrington noted that all 50 states have deployed CISA-funded or state-funded intrusion detection sensors in their systems, known as Albert sensors, and that hundreds of election officials and private sector election infrastructure partners have signed up for a range of CISA's cybersecurity services, from recurring scanning of their systems for known vulnerabilities on internet-connected infrastructure to more in-depth penetration testing.

"Technology and the evolving threat landscape has shaped the role of election officials, and election officials have seen a significant expansion of their duties beyond simple election administration to a position more akin to technology and information managers and IT managers," Harrington said.

The series of election-related panels hosted by EAC and Pepperdine University were held in recognition of the Help America Vote Act, the 2002 law that established the EAC and made sweeping changes to voting systems and election administration following the 2000 presidential election. Known as HAVA, the law, in part, requires EAC to develop voluntary voting system guidelines, which outline the security, reliability and accuracy requirements that voting systems are tested against in order to receive certification under the EAC's testing and certification program.

Last year, the EAC adopted its voluntary voting system guidelines 2.0 to further enhance the testing requirements for voting systems. No election vendors have received VVSG 2.0 certification thus far, however, and voting systems are unlikely to be certified under the new guidelines until at least 2024.

Beyond the updated guidelines for securing and certifying voting systems across the country, some of the panelists discussed the need to develop standards for securing non-voting systems as well, such as electronic poll books and voter registration systems. EAC announced in 2020 that it was partnering with the Center for Internet Security to launch a non-voting election system technology verification pilot program, although it remains unclear whether this pilot will lead to broader adoption or the issuance of non-voting system guidelines from EAC. A report on the pilot, called RABET-V, was released in January 2021.

Traci Mapps, the vice president of SLI Compliance, a certification body that operates the EAC-accredited voting system test laboratory, said that all components of the election process, including non-voting systems, should receive testing to ensure they are meeting set standards.

"As a voting system test lab we've participated in a lot of that testing, but I do feel that there should be a central set of standards that these systems are tested to so that they can be certified and help election officials to make sure that these systems are secure," Mapps said.

Even as EAC, election officials, and private sector election infrastructure partners continue to enhance their collaborative efforts to secure voting systems, there remains a need for greater public awareness of the multi-level safeguards and testing that go into securing U.S. elections. Mapps noted that the majority of states already require that their voting systems are certified by the EAC or tested in a voting system test lab, and that sharing that information more broadly with the general population could help combat some of the misinformation and disinformation that threatens to undermine public trust in election results.

"Time and time again, I talk to people and they have no idea that there are voting system test labs out there that are doing testing on voting systems," Mapps said. "And I think educating people to let them know about the testing that's being done may be helpful."

But election officials and CISA remain confident about the security of election systems, particularly with the strong safeguards that are already in place to deter nation-state actors and other cyber adversaries. And when it comes to some of the more outlandish conspiracies surrounding the 2020 election, including the unfounded claims that election results were somehow filtered through networks in other countries, Harrington said that existing procedures and controls largely mitigate the potential for that type of large-scale outside intrusion.

"The evidence is not there, but there are also a lot of controls that are in place to mitigate that kind of risk," Harrington said, citing logic and accuracy testing, post-election tabulation audits and other security measures as some of the common procedures that would identify such an occurrence.

Read more:
Election Officials Have Been Largely Successful in Deterring Cyber Threats, CISA Official Says - Nextgov


New UK Telecoms and Internet Security Code to Go Live in October – ISPreview.co.uk

The UK Government has announced that network providers (e.g. broadband ISPs and mobile operators) will become subject to new regulations under the Telecommunications (Security) Act from 1st Oct 2022, which aside from restricting the use of Huawei, will also impose changes to make networks safer from cyberattack.

Just to recap. The TSA became law in November 2021 (full summary). The goal was to impose stronger legal duties on public telecoms providers to help defend their networks from cyber threats, which could cause network failure or the theft of sensitive data. Few could disagree with that desire, although politicians, who tend not to fully understand how such networks work in the real world, are often terrible at getting technical rules right.

The new framework hands significant new powers to the Government and Ofcom, enabling them to intervene in how telecommunications companies run their business, manage supply chains, design and even operate networks. Fines of up to 10% of turnover or £100,000 a day will be issued against those that fail to meet the required standards, which would be a particularly big burden for smaller players.

Digital Infrastructure Minister, Matt Warman, said:

We know how damaging cyber attacks on critical infrastructure can be, and our broadband and mobile networks are central to our way of life.

We are ramping up protections for these vital networks by introducing one of the world's toughest telecoms security regimes which secure our communications against current and future threats.

The related Code of Practice (CoP) for all this puts telecoms providers into three tiers, which are filtered according to size and importance to UK connectivity (i.e. the smallest players see softer regulation). Tier 1 providers are the biggest players (e.g. BT, Vodafone, Virgin Media / VMO2 etc.), while Tier 2 providers are medium-sized players (e.g. Hyperoptic, Zen Internet) and Tier 3 reflects the smallest companies (those that are not micro-entities).

One catch above is that some smaller providers may supply parts of networks and services owned by larger Tier 1 or Tier 2 providers. In that case, the regulations stipulate that where a provider acts as a third-party supplier to another provider, they must take security measures that are equivalent to those taken by the provider receiving their services.

Telecoms providers will be legally required to:

Protect data stored by their networks and services, and secure the critical functions which allow them to be operated and managed;

Protect tools which monitor and analyse their networks and services against access from hostile state actors;

Monitor public networks to identify potentially dangerous activity and have a deep understanding of their security risks, reporting regularly to internal boards; and

Take account of supply chain risks, and understand and control who has the ability to access and make changes to the operation of their networks and services.

The Government, which has been consulting on the implementation of all this since March 2022 (here), have today issued their response (here). Overall, there were 38 responses to the consultation, from public telecoms providers, industry trade bodies and telecoms suppliers etc. As a result of this, a number of changes have been made to the regulations, which may help to soften the blow a bit. We've summarised some of them below.

Changes to the Regulations Post-Consultation

The draft code stipulated that providers should offer their customers a no-additional-cost replacement of customer premises equipment (e.g. broadband routers) supplied by that provider, once that equipment had gone out of third party support. But operators warned that the cost of doing this would be extreme. The Government have thus amended the draft code of practice to remove the suggestion that providers should replace CPE at no extra cost to the customer.

The implementation timeframes for Tier 1 providers are now aligned with the Tier 2 timeframes, with the exception of the timeframes for the most straightforward and least resource intensive measures. Tier 1 providers will, therefore, now be expected to:

implement the most straightforward and least resource intensive measures by 31 March 2024;

implement relatively low complexity and low resource intensive measures by 31 March 2025;

implement more complex and resource intensive measures by 31 March 2027; and

implement the most complex and resource intensive measures by 31 March 2028.

This approach, said the Government, would ensure that all public providers are afforded appropriate time to implement measures while preserving the need for new security measures to be introduced as soon as is feasible. Previously they sought some implementation by 31st March 2023 and that, complained operators, would have been very costly and difficult to achieve.

Clarifications were made to ensure security measures are targeted at the parts of networks most in need of protection, like new software tools that power 5G networks. In addition, it's specifically noted that private networks are NOT in scope of the new security framework introduced by this Act.

Inclusion of further guidance on national resilience, security patching and legacy network protections, to help providers understand actions that need to be taken.

Despite the changes, it remains a reality that practically applying such rules to hugely complex national telecommunications networks, with global connectivity and supply chains to consider, will not be so easy (i.e. modern software, internet services and hardware are all produced with bits and pieces, as well as connectivity, from across the world). Much will also depend upon Ofcom's approach, which we're still waiting to see (here).

The related Electronic Communications (Security Measures) Regulations will now be laid in Parliament for Parliamentary scrutiny under the negative procedure. It is intended that the regulations will subsequently come into force on 1st October 2022. On the same day as the regulations, the draft Telecommunications Security Code of Practice will also be laid in Parliament, in accordance with Section 105F of the Communications Act 2003. If neither House resolves against the draft code of practice within 40 sitting days, it will then be issued and published in final form.

Ofcom will regulate the new framework in accordance with its new functions under the Act to seek to ensure that public telecoms providers comply with their security duties. Ofcom has a clear remit to work with public telecoms providers to improve the security of their networks and services and monitor their compliance, including the power to request information.

The regulator is expected to begin this process in advance of the first implementation timeframes in the draft code of practice, which are set for completion by 31st March 2024. Ofcom will naturally produce its own procedural guidance on its approach to monitoring and enforcing industrys compliance with the security duties, and has consulted publicly on a draft of this.

Continued here:
New UK Telecoms and Internet Security Code to Go Live in October - ISPreview.co.uk


Early cyber hygiene adoption key in fighting security threats – Deccan Herald

Instilling values of cyber hygiene at a young age is critical in countering cyber security threats, Sanjay Kumar Das, Joint Secretary (Department of IT and Electronics), Government of West Bengal, has said.

Speaking to DH on the sidelines of a cyber security congress organised by Infosec Foundation, a Kolkata-based not-for-profit, Das said a bottom-up approach is the way forward in addressing the gaps in cyber hygiene awareness that facilitate cyber crimes.

He said states, including Karnataka, Andhra Pradesh and Telangana, are doing commendable work in tackling emerging cyber security challenges even as the technologies evolve at a staggering pace.

Speaking about the threats posed by unregulated digital loan apps and aggressive recovery methods, he said customers need to understand the importance of due diligence and ask themselves basic questions like "Why is this loan being offered to me?" before accepting the loans.


"We need to create awareness on these practices among students, from schools to colleges. The focus should be on addressing the cause, not the outcome. Fake loan apps are an outcome," he said.

Das said the Cyber Security Centre of Excellence under the IT and Electronics Department in West Bengal has been taking up awareness initiatives, including a comic book with stories around 12 common cyber crimes including matrimonial fraud, phishing, fake calls from banks and fake modelling offers.

Focus on new challenges

Cyber security experts discussed the challenges in complying with global internet security standards during sessions in the congress.

Sushobhan Mukherjee, chairman of Infosec Foundation, told DH that the congress was aimed at providing a platform for stakeholders, including governments, industry and the public, to collaborate on solutions for cyber security challenges.

During a moderated session on Artificial Intelligence and Machine Learning applications in cyber security, experts underlined the effectiveness of AI systems in responding to malware and other forms of cyber aggression.

The session also saw the panelists caution users against developing a false sense of security with AI systems.

Here is the original post:
Early cyber hygiene adoption key in fighting security threats - Deccan Herald


Why OT Environments Are Getting Attacked And What Organizations Can Do About It – Spiceworks News and Insights

As usual, financial gain is the biggest motivation behind cyberattacks against operational technology: about 80% of OT environments were hit by ransomware last year. Etay Maor, senior director of security strategy for Cato Networks, discusses how aging technology, infrequent patching made difficult by work stoppages, and limited security resources make OT systems vulnerable, and how organizations can mitigate these challenges.

Much has changed for operational technology (OT) in the past decade. The rising demand for improved connectivity of systems, faster maintenance of equipment and better insights into utilization of resources has given rise to internet-enabled OT systems, which include industrial control systems (ICS) and others such as supervisory control and data acquisition (SCADA) systems, distributed control systems (DCSs), remote terminal units (RTUs), and programmable logic controllers (PLCs).

With everything becoming internet-facing and cloud-managed, the manufacturing and critical infrastructure sector (i.e., healthcare, pharma, chemicals, power generation, oil production, transportation, defense, mining, food and agriculture) are becoming exposed to threats that may be more profound than data breaches. Gartner believes that by 2025 threat actors will weaponize OT environments to successfully harm or kill humans.


According to SANS research, there are four key reasons why cyber criminals attack OT and Industrial Control Systems (ICS) environments: ransomware or financial crimes; state-sponsored attacks that cause wide-scale disruption like NotPetya (credited for causing massive collateral damage and the world's first power blackouts); attacks by non-state attackers for terrorism or hacktivism (e.g., the Oldsmar, FL water treatment facility hack); and attacks on devices and things that cannot protect themselves. Financial crime is the biggest driver, with 80% of OT environments experiencing a ransomware attack last year.

A number of reasons make OT/ICS environments vulnerable, chief among them aging technology, infrequent patching made difficult by work stoppages, and limited security resources.

We need to fundamentally change our thinking in terms of how we build these systems and whether or not they should be so readily accessible. Here are best practices that can help:

Legacy cybersecurity approaches are predicated on protecting technology, but this approach becomes irrelevant with internet-facing OT. This can be easily demonstrated with the Purdue model, where historically, information flowed from level zero to level one to level two and back. It did not have to flow through a network but through machines connected to networks, and security teams had to lock those machines down to secure their infrastructure. Today, with the proliferation of Ethernet on the manufacturing floor, any level can communicate with the external world; hence, this approach has become obsolete. Enterprises must instead follow a micro-segmentation approach, where security can be layered on each functional area within the process to contain any attack, as sketched below.
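As a rough illustration of that micro-segmentation idea, the following sketch models a default-deny allow-list between functional zones; the zone names and permitted flows are hypothetical examples, not a reference architecture.

```python
# Hedged sketch of default-deny micro-segmentation between functional zones.
# Zone names and allowed flows are illustrative assumptions only.

ALLOWED_FLOWS = {
    ("plc_zone", "scada_zone"),        # field controllers -> supervisory control
    ("scada_zone", "historian_zone"),  # supervisory control -> site data historian
}

def is_permitted(src_zone: str, dst_zone: str) -> bool:
    """Anything not explicitly allowed is blocked, containing lateral movement."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

print(is_permitted("plc_zone", "scada_zone"))    # True: expected process traffic
print(is_permitted("corporate_it", "plc_zone"))  # False: blocked by default
```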

With more and more ICS networks embracing the benefits of the cloud, the perimeter is no longer the defensible position it once was. Studies show that Level 3 of the Purdue Model (which processes data from the cloud or higher-level business systems) is affected by the most vulnerabilities. Moreover, the rise of remote work and the growing use of remote administration applications like VNC (virtual network computing) and RDP (remote desktop protocol) requires a strong identity and access management solution that does not extend too much trust to authorized users. Leveraging SASE (secure access service edge), which converges SD-WAN (software-defined wide area networking) and SSE (security service edge) into a global cloud service, is one way enterprises can manage, control and monitor the connectivity of data centers, branches and edges and implement a "never trust, always verify" approach.

Industrial security is a team sport. You need vast experience and knowledge across many different disciplines: chemical engineering, process engineering, mechanical engineering, electrical engineering, human psychology, cybersecurity, industrial networking, traditional networking and cloud services. Since most threat actors tend to live off the land before they reveal themselves, it is important for security teams to have a pulse on not just cyber variables but also process variables and physical variables like temperature, pressure, flow, movement, time, etc.

Employees, vendors, partners, asset owners, engineering teams and operators are all needed to jointly mitigate potential threats and deliver effective incident response.

Industrial environments must always be safe, secure, and operational. Safety should be treated as one of the most foundational elements alongside availability, integrity, and confidentiality.


See the article here:
Why OT Environments Are Getting Attacked And What Organizations Can Do About It - Spiceworks News and Insights


In Defense of the Global, Open Internet – Lawfare

In the global race for internet governance, freedom is the West's strategic advantage. And yet, a recent report from the Council on Foreign Relations (CFR) declares provocatively that the era of the global internet is over. The report's evidence for this claim is an assertion that the past decade-plus of democratic investment in global internet freedom has failed, and that it is therefore time for the United States to jettison the vision it has championed of a global, open, secure, and interoperable internet. The report argues that the United States should focus instead on responding to the geopolitically driven cyber activities of China and Russia, countries that position the internet as a cyber-military battlefield rather than a space designed to empower innovation and social progress. As CFR's Adam Segal writes in Lawfare, this is an intentional departure from the organization's 2013 report and reflects a sense of lost possibility and influence. Indeed, the world has changed. But moving the goalposts by abandoning even the aspiration of protecting global human rights online, as the new report recommends, would be a strategic mistake. It would likely harm individuals living in repressive environments in the short term and hamper the ability of Western governments to advance shared goals of security and openness in the long term.

Cyber warfare and information warfare are undoubtedly in our midst. However, embracing the CFR report's narrative and changing the course of U.S. policy in response to the continued trajectory of attacks not only would undermine human rights, democracy, and the internet itself but also would empower governments like China and Russia that benefit most from the "every country for itself" approach to the digital world. Instead, the United States should recommit to its vision for internet freedom by articulating and demonstrating how democratic states can address complex cybersecurity threats and digital harms through innovative, collaborative, and democratic means.

The CFR report proposes three pillars for a new U.S. foreign policy. Notably, the specific proposals put forth in the report are not incompatible with internet freedom; but they fail toindividually or collectivelyeffectively replace it.

First, the report calls on the United States to confront reality and bring together allies and friends around a new vision for the internet, by prioritizing a "trusted, protected international communication platform." Securing communications online is a worthy goal and one that can and should be developed collectively through multilateral mechanisms. But within the report's absolutist paradigm, this recommendation furthers an explicit us-versus-them dynamic on the international stage, declaring that some governments are sufficiently aligned with U.S. interests to be permitted into the club, while others will be excluded.

Putting aside the practical challenges of deciding who gets in and who stays out (for a taste of how messy this would get, look at the invitations to the 2021 U.S. Summit for Democracy), this approach is at odds with the globally interconnected infrastructure and protocols that make up the internet. The internet is a network of networks, and despite the advanced information controls imposed in some jurisdictions, its technical design, including the critical Internet Protocol and Border Gateway Protocol, is built to maintain interconnection above all else. Separating countries into friends and enemies also, ironically, buttresses the long-standing goals of China, Russia, Iran, and other authoritarian regimes to center internet governance in cyber sovereignty rather than internationally protected human rights.

In a moment of historic expansion of internet connectivity, most governments around the world still haven't firmly established their position on the spectrum between an authoritarian and freedom-centric approach to internet governance. If the United States, in particular, portrays the future of the internet as inevitably isolationist, it is as likely to push governments toward authoritarian models as it is to incentivize governments away from them. This could result in a potentially disastrous fait accompli that will likely imperil innovation, equity, economic growth, and human rights in the decades ahead.

A shift toward walling off countries with differing views not only would provide normative validation for existing national firewalls but also would abandon the people within those countries seeking to realize their rights. This would contradict the Biden administration's Presidential Initiative for Democratic Renewal and the recent U.S.-led Declaration for the Future of the Internet, which provides a clear and compelling alternative by creating an opportunity to join for partners who actively support a future for the Internet that is "open, free, global, interoperable, reliable, and secure" without boxing other nations into choosing a side.

Second, the report calls for U.S. foreign policy to balance more targeted diplomatic and economic pressure and more disruptive cyber operations with self-imposed restraint among U.S. allies. It is possible to consistently promote a global open internet while increasing diplomatic, economic, and digital pressure to support that goal. Where tensions arise, such as when Ukraine asked the Internet Corporation for Assigned Names and Numbers to disconnect Russia from the global internet, the balance typically lies in favor of preserving the global internet. American policy can and should reinforce this.

In its pursuit of a more globally harmonized internet policy, the United States must complement its outreach to current allies and its response to current threats with greater engagement with the majority world. Businesses in these regions benefit from access to American capital, markets, and partners. Their governments can realize incredible benefits through joint economic programs and global digital flows, showing the merits of openness and freedom rather than oppression and manipulation.

Focusing primarily on increasing pressure on adversaries is likely to mean taking attention and resources away from direct support to and engagement with the myriad countries whose only existing option for stronger internet infrastructure has been, and remains, the acceptance of Chinese aid and its accompanying influence. China has invested massively around the world through the Belt and Road Initiative, including in global network infrastructure, creating debt and dependencies across a wide number of states. And Chinas narrative of control is likely attractive to governments seeking to expand their domestic authority both online and offline. But for every Myanmar-like setback, there is usually a Sri Lanka-like opportunity. The United States would be wise to continue investing in open internet policies that facilitate democratic turns and position itself to provide critical assistance to convert these moments into lasting, democratic change.

The report is correct in asserting that America must update its approach to cyber defense, including responses to cyberattacks at all levels of severity and foreign disinformation campaigns. But again, that can happen while also asserting that internet freedom is a universal goal and that a siloed internet is ultimately unsustainable and counterproductive for all nations.

Third, the report asserts that the United States needs to put its own proverbial house in order. This statement is entirely accurate. There is much work to be done to match the leadership of the European Union and construct a suitable American regulatory framework for privacy, data use, platform accountability, and other issues. The report is correct in highlighting the multiyear gap between Brussels and Washington on data protection in particular, and the consequences of this disconnect for global connectivity and commerce. But in seeking to close this gap, the how matters. U.S. legislative and regulatory efforts must serve, and not subordinate, American economic and social goals. In building the response to this challenge, the U.S. playbook must be clearly distinguishable from that of repressive states, or they will have won the ultimate war. The United States must return to its roots of global power online, which lie in openness and fostering a climate of innovation.

In sum, the CFR report seems to equate a free and global internet with anarchy at worst and naive insecurity at best. That is simply not true. Internet freedom posits a rights-centered and rules-based approach to internet governance. Necessary efforts that restrict rights are allowed under international human rights law, when they are clearly articulated, serve legitimate purposes, are proportionately tailored, and are accompanied by relevant accountability and transparency measures. These are the yardsticks against which future actions will continue to be measured, regardless of how the United States frames its cyber policy. They also happen to be the clearest principles policymakers and analysts can use to draw distinctions between authoritarian approaches and democratic ones.

So what? Does it matter whether the goal of foreign policy is a global, open, and free internet (recognizing the impossibility of a perfect end state) or instead a trusted, protected international communication platform among allies? Particularly when many of the same near-term tactics, and many of the recommendations in the CFR report, would likely be the same regardless of how the objectives and strategies are framed?

In fact, it does matter, a lot. Governance of the digital world is perhaps the greatest geopolitical competition of our generation. The internet's infrastructure is deeply and inherently interconnected and constantly evolving. Stasis and detente are not concepts that translate well in this space and cannot realistically serve as goals of U.S. policy, for better or worse. If the United States steps back in the fight for global internet freedom, other forces will most likely step up and continue to degrade it, exponentially expanding the scale and scope of repression and of harm to human rights.

Focusing on negatives also risks ignoring much of the value that the internet has created and continues to create. And the primary remaining value that the United States must prioritize is freedom. As one of us has argued previously, when compared to offline spaces, the internet continues to create significant opportunities for courageous, consequential, and U.S.-interest-aligned activities including independent journalism, accountability, and the protection of minority rights.

In all likelihood, this contrast in narratives is reflective of perspective and process. The CFR report is bereft of participation from civil society and digital rights activists, including those who have carried the torch of internet freedom in repressive environments. These stakeholders have the best perspective of internet repression, how it is experienced, and how to counter it. Their voices, undoubtedly, would have changed the report, which instead focuses on nation-state-level considerations and concerns. Unsurprisingly, the result is a framing of internet repression as a tactic of state power (which it is, but not solely) as well as a lack of appreciation of the full impact of internet freedom. Granular effort to help individuals realize their rights improves daily life around the world and contributes to organizing and building power that can challenge ossified authoritarian states and systems.

Centering internet-related foreign policy around freedom, rather than nation-state conflict, provides a strategic advantage in the long term as well as immediate benefits for the realization of human rights in repressive environments. Rather than choose isolation, the Biden administration should double down on the collaborative model that governs the internet, increase its investments in internet freedom, clarify U.S. domestic approaches while working to build alignment on internet policy around the world, and lead by example to show that openness and innovation build the best path to socioeconomic success.

Supporting the true nature of the internet as global, open, and free portrays the repressors of internet freedom as reactive, aberrant, fragile, and ultimately temporary. It is true that the walls of repression have grown taller. It is also true that those walls are filled with cracks. The United States' best response is not to build walls of its own but, instead, to support the expansion of human rights and democratic norms to nations around the world as the global internet continues to grow and evolve.

The rest is here:
In Defense of the Global, Open Internet - Lawfare


Everything You Need to Know About SD-WAN – Spiceworks News and Insights

Software-defined WAN or SD-WAN is a virtual wide area network (WAN) that relies on software technologies like internet-based communication tunnels, software-driven network encryption, firewall software, etc. to operate a mid-sized to large-scale computer network spread across locations. This article explains how SD-WAN works, its benefits, and the best SD-WAN solutions in the market.

Software-defined WAN or SD-WAN is defined as a virtual wide area network (WAN) that relies on software technologies like internet-based communication tunnels, software-driven network encryption, firewall software, etc. to operate a mid-sized to large-scale computer network spread across locations.

A software-defined wide area network (SD-WAN) uses software-defined technology and infrastructure. SD-WAN dissociates the networking hardware from the control mechanism and thus streamlines the WAN's operation and management. Organizations that use SD-WAN solutions can build higher-performance WANs using inexpensive internet connections, at significantly lower costs than private WAN connection technologies such as multiprotocol label switching (MPLS).

SD-WAN solutions make it easier for organizations to manage firewalls and routers, upgrade software and firmware, virtual private networks (VPN), and remote clients through a centralized management interface. The centralized management control is used to securely and efficiently route traffic across the WAN directly to trusted providers such as software-as-a-service (SaaS) and infrastructure-as-a-service (IaaS). It also minimizes labor costs by cutting maintenance costs and lowers the cost of equipment.

During the early years, WANs required backhauling all traffic from branch offices to a data center where advanced security services were applied. Traffic between the source and the data center was routed based on complex protocols and rules, such as transmission control protocol/internet protocol (TCP/IP) addresses and access control lists.

Ultimately, this led to delays resulting in poor application performance, poor user experience, and huge costs due to expensive bandwidth. Users also had to incur expenses to install MPLS routers at each location. Performing upgrades on firmware or software took longer due to network complexities. The network architecture was also not optimized for cloud infrastructure. The limitations of traditional WANs drove the change to a better SD-WAN technology that replaced MPLS.

SD-WAN is deployed in an organized way in branch offices and data centers. It is optimized for cloud infrastructure and associates cloud technology with mobile computing. It separates the data plane and control plane of the network. It has a centralized management interface where traffic is managed and monitored. It has a single management portal which reduces complexities and makes it easier to track applications, thus improving performance and operational efficiencies.

By lowering infrastructure and transport costs, SD-WAN helps organizations save money. SD-WAN provides end-to-end encryption over the entire network, providing secure connections to its users. Additionally, SD-WAN can prioritize traffic for business-critical applications and route it through the most efficient pathway.


The main objective of SD-WAN is to connect end users to applications, regardless of where those users are located. SD-WAN steers traffic according to the business requirements of the application. These business requirements vary from the priority of the application to the security policies that must be enforced or the application performance required. Usually, mission-critical applications are given the highest priority. The underlying transport may vary from MPLS to broadband to 4G LTE.

The SD-WAN architecture separates the control and management functions, applications, and WAN transport services. It has a centralized control plane that stores and manages all the data on the traffic and applications. The centralized control plane monitors and adapts traffic to suit the application demand and delivers the optimum experience.
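To illustrate how such a control plane might steer traffic, here is a hedged sketch of policy-based path selection; the transports, metrics, thresholds, and application names are invented for the example and do not reflect any particular vendor's implementation.

```python
# Hypothetical application-aware path selection: measure each transport and
# pick the lowest-latency path that satisfies the application's policy.

PATHS = {  # illustrative, periodically refreshed measurements per transport
    "mpls":      {"latency_ms": 30, "loss_pct": 0.1},
    "broadband": {"latency_ms": 55, "loss_pct": 0.5},
    "lte":       {"latency_ms": 80, "loss_pct": 1.5},
}

POLICIES = {  # business intent per application class
    "voip":   {"max_latency_ms": 50, "max_loss_pct": 1.0},   # mission-critical
    "backup": {"max_latency_ms": 500, "max_loss_pct": 5.0},  # best effort
}

def select_path(app: str) -> str:
    policy = POLICIES[app]
    eligible = [
        name for name, m in PATHS.items()
        if m["latency_ms"] <= policy["max_latency_ms"]
        and m["loss_pct"] <= policy["max_loss_pct"]
    ]
    # Prefer the best eligible path; if none qualifies, degrade gracefully.
    pool = eligible or list(PATHS)
    return min(pool, key=lambda name: PATHS[name]["latency_ms"])

print(select_path("voip"))    # 'mpls' with the illustrative numbers above
print(select_path("backup"))  # also 'mpls'; a cost-aware tie-breaker could differ
```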

The following are features of SD-WAN that users should consider before choosing an SD-WAN solution model:


SD-WAN allows organizations and small businesses to securely connect their users to applications by taking advantage of any combination of network services. When choosing the right SD-WAN solution providers, users should consider factors such as security, price, availability of hybrid wide area network (WAN) solutions, and the ease with which they can be deployed. The top 10 SD-WAN solutions include:

Powered by Meraki, Cisco SD-WAN is a scalable, programmable, and open solution that allows users to connect to any application. It offers control, visibility, and real-time analytics to its users. Cisco SD-WAN offers cloud management services and it can also be deployed on-premise. It is integrated with capabilities that allow it to perform optimization of applications, unified communications, multi-cloud services, and security.

Fortinet FortiGate provides a secure networking approach that combines SD-WAN, advanced routing, and a next-generation firewall (NGFW) to enforce consistent security and network policies and to reduce operational costs through automation, self-healing, and deep analytics. It also simplifies wide-area network (WAN) architecture by accelerating network and security convergence. Fortinet FortiGate SD-WAN improves multi-cloud application performance through multi-path control and application identification and steering.

Oracle SD-WAN provides simplified WAN management services such as SD-WAN, firewall, routing, and WAN optimization. It gives users high-bandwidth, inexpensive internet connections and delivers a network that is easy to deploy and manage. Oracle SD-WAN offers reliable, high-quality, flexible, and secure services. With its high availability, users get faster applications and better networks, and it supports safer migration of applications into the public cloud.

Citrix SD-WAN combines comprehensive, cloud-delivered security with SD-WAN, analytics, and secure internet access. It provides strong security at the WAN edge, protecting against a broad range of threats. Its cloud on-ramp feature offers flexible options for cloud access, simplifying the transition to multi-cloud. Citrix SD-WAN reduces network costs and increases agility.

CenturyLink SD-WAN unifies network management across different network types, creating an agile and responsive wide area network. It lets users leverage broadband connections for bandwidth-intensive applications and provides data analytics and reporting along with performance-based application routing. CenturyLink SD-WAN offers a reliable solution that helps users reduce operating costs for equipment and staff.

Wanify has partnered with VeloCloud to deliver VeloCloud SD-WAN. It manages end-to-end processes and improves network performance by combining multiple connections. It supports network agility and application growth by offering optimized access to cloud applications and data centers, gauging real-time network performance and routing application traffic over the most efficient paths. Wanify provides customer support, offers a secure and customizable solution, and manages carriers on behalf of its clients.

See More: What Is a Mesh Network? Meaning, Types, Working, and Applications in 2022

Palo Alto Networks offers SD-WAN services through Prisma, which provides networking and security in a single platform. It enables app-defined SD-WAN policies that eliminate network problems, increase bandwidth, and simplify management. Palo Alto Networks Prisma SD-WAN gives users fine-grained control and connection options and supports machine learning and automation. It also helps users with router modernization and cloud migration.

Exinda SD-WAN provides businesses with a stable, secure, reliable, and cost-effective solution. It combines and manages up to 12 kinds of internet transport from local service providers. The Exinda SD-WAN network router monitors, detects, and adapts to fluctuations from internet service providers as well as to changes in traffic. It automatically resolves network problems, avoiding interruptions to internet services and applications.

It allows users to add bandwidth to their networks when they need more capacity. Integrating Exinda SD-WAN with the Exinda Network Orchestrator further improves its ability to accelerate application performance.

Masergy SD-WAN leverages its secure edge network with built-in Fortinet security. It gives clients end-to-end visibility and uses artificial intelligence for IT operations (AIOps) to analyze networks and recommend reliability improvements. It uses AIOps and shadow IT discovery tools to build overlays that fit each network and customizes rules to meet network and application requirements. Masergy SD-WAN also supports co-management with its users to streamline inefficiencies.

Aryaka SD-WAN has built-in WAN optimization that helps guarantee application performance on this feature-rich platform. The Aryaka SD-WAN service does not require complex appliances or network management software, as it is a cloud-delivered service that users connect to through virtual private networks (VPNs). Aryaka SD-WAN provides insightful analytics on a secure platform that offers multi-cloud networking, reliable throughput, real-time visibility, and single-day deployments of new technology.

See More: What Is Network Management? Definition, Key Components, and Best Practices

The global software-defined wide area network (SD-WAN) market size is expected to grow from $1.9 billion in 2020 to $8.4 billion by 2025, a compound annual growth rate (CAGR) of 34.5%, according to research by MarketsAndMarkets (a quick recomputation of that rate appears below). These figures reflect a growing appetite for SD-WAN solutions among enterprises, driven by a slew of business benefits, which are discussed in the sections that follow.
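As a quick sanity check (not taken from the MarketsAndMarkets report itself), the growth rate implied by the two quoted market sizes can be recomputed in a couple of lines:

```python
# Recompute the implied CAGR from the quoted market sizes (illustrative check only).
start, end, years = 1.9e9, 8.4e9, 5  # 2020 -> 2025
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~34.6%, consistent with the ~34.5% cited
```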

In recent years, enterprises and other organizations have embraced advanced technologies to gain an edge over their competitors. However, this adoption has brought its fair share of problems in the form of cybercrime.

Most SD-WAN solutions offer basic built-in security features, such as firewall and VPN functions, that improve security for their users. Users who need advanced protection against data loss, downtime, and legal liability can look for SD-WAN solutions that add next-generation firewalls (NGFW), intrusion prevention systems (IPS), encryption, and sandboxing capabilities.
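To illustrate the kind of policy such built-in security enforces, the sketch below models an edge firewall as a first-match-wins rule table. The rule format, field names, and zone names are hypothetical and only meant to show the concept.

```python
# Illustrative sketch of the kind of edge firewall policy an SD-WAN appliance
# might enforce; the rule format, fields, and zones are hypothetical.

RULES = [
    {"action": "allow", "app": "office365", "zone": "branch", "inspect": "ips"},
    {"action": "allow", "app": "voip",      "zone": "branch", "inspect": None},
    {"action": "deny",  "app": "*",         "zone": "guest-wifi", "inspect": None},
    {"action": "allow", "app": "*",         "zone": "*", "inspect": "sandbox"},  # default: inspect unknowns
]

def evaluate(app: str, zone: str) -> dict:
    """Return the first rule matching this flow (first-match-wins, like most firewalls)."""
    for rule in RULES:
        if rule["app"] in (app, "*") and rule["zone"] in (zone, "*"):
            return rule
    return {"action": "deny", "inspect": None}  # implicit deny if nothing matches

print(evaluate("dropbox", "guest-wifi"))  # denied on the guest network
print(evaluate("unknown-app", "branch"))  # allowed but sent for sandbox inspection
```

Commercial NGFW-class SD-WAN devices replace the simple app/zone matching shown here with deep application signatures, IPS engines, and cloud sandboxing.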

Users can configure SD-WAN to steer business traffic through the most efficient route by prioritizing real-time services such as voice over internet protocol (VoIP) and other business-critical traffic. SD-WAN's flexibility also lets users vary bandwidth across any local internet provider so speed can scale with real-time demand, and reducing bandwidth consumption through deduplication and compression helps lower the total cost of ownership (TCO).
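As a rough illustration of that steering, the sketch below classifies flows and assigns the differentiated services code point (DSCP) markings commonly used for each class (for example, EF for voice). The classifier is deliberately simplistic; real SD-WAN appliances identify applications by signature rather than by port.

```python
# Sketch of how an SD-WAN edge might classify traffic and set QoS markings.
# The class names and DSCP values follow common conventions but are illustrative only.

DSCP = {"voice": 46, "video": 34, "critical-data": 26, "best-effort": 0}

def classify(dst_port: int, protocol: str) -> str:
    """Very simplified classifier based on well-known ports."""
    if protocol == "udp" and 16384 <= dst_port <= 32767:
        return "voice"           # typical RTP range used by many VoIP systems
    if dst_port in (443, 80):
        return "critical-data"   # assume SaaS traffic here; real devices use app signatures
    return "best-effort"

def qos_mark(dst_port: int, protocol: str) -> tuple[str, int]:
    cls = classify(dst_port, protocol)
    return cls, DSCP[cls]

print(qos_mark(20000, "udp"))  # ('voice', 46) -> steered to the low-latency path
print(qos_mark(443, "tcp"))    # ('critical-data', 26)
```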

SD-WAN allows bandwidth capacity to be scaled up or down by directly adding internet broadband connectivity. Multiple WAN service types, such as direct internet access or private multiprotocol label switching (MPLS), can be bonded together into a single logical link.
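A minimal sketch of that bonding idea, assuming a hypothetical set of member links: flows are hashed onto links in proportion to capacity, so a single logical pipe is presented while each flow stays on one path and avoids reordering. Real implementations also add per-packet striping, health checks, and failover.

```python
# Minimal sketch of bonding several WAN links into one logical pipe by hashing
# flows across them in proportion to capacity (illustrative only).

import hashlib

LINKS = [("mpls", 100), ("broadband-1", 500), ("broadband-2", 300)]  # (name, Mbps)

def pick_link(flow_id: str) -> str:
    """Consistently map a flow to one member link, weighted by capacity."""
    total = sum(capacity for _, capacity in LINKS)
    bucket = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16) % total
    for name, capacity in LINKS:
        if bucket < capacity:
            return name
        bucket -= capacity
    return LINKS[-1][0]

# The same 5-tuple always lands on the same link, avoiding packet reordering.
print(pick_link("10.0.0.5:51514->52.1.2.3:443/tcp"))
```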

Other optimization techniques SD-WAN employs to improve network agility include data deduplication, data compression, and secure sockets layer (SSL) acceleration.

According to a 2018 forecast survey by IDC Research, up to two-thirds of respondents expected to save 5-19% by using SD-WAN technologies, while a quarter expected savings upwards of 39%. SD-WAN's self-managed procedures and automation let organizations reduce the number of external IT experts required for periodic testing and maintenance, making it cost-effective.

SD-WAN can aggregate multiple direct internet access (DIA) lines for WAN connectivity, reducing overall bandwidth costs because less network hardware is required. Organizations can also bring new branches online at any location in less time and at lower cost.

As small businesses adopt more technology, including local, edge, and cloud-based applications, network complexity becomes a common problem. Competition for limited bandwidth leads to poor network performance and can force businesses to hire more on-site IT specialists to manage local infrastructure, increasing costs. SD-WAN addresses this by monitoring the performance of different traffic types and alerting administrators so that enough bandwidth is allocated. Users can also configure SD-WAN to send critical traffic along the most efficient path to its destination to improve performance.

SD-WAN is usually run from a centralized management interface that monitors the network and manages traffic. From a single management portal, administrators can allocate paths to applications according to criticality, provision new sites, perform software and firmware upgrades, and flex bandwidth. A centralized management plane reduces complexity and makes it easier to track applications and their performance from one place.
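The snippet below shows, purely hypothetically, what driving such a portal programmatically could look like over a REST API. The controller URL, endpoints, and payload fields are invented for illustration; a real deployment would use the vendor's documented API.

```python
# Hypothetical example of driving a centralized SD-WAN controller over REST.
# The URL, endpoints, and payload fields are invented for illustration only.

import requests

CONTROLLER = "https://sdwan-controller.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

# Provision a new branch site from the single management portal.
site = {
    "name": "branch-042",
    "links": ["broadband", "lte"],
    "template": "retail-branch",  # pushes firewall, QoS, and routing policy
}
resp = requests.post(f"{CONTROLLER}/sites", json=site, headers=HEADERS, timeout=10)
resp.raise_for_status()

# Schedule a firmware upgrade across every edge device at that site.
upgrade = {"site": "branch-042", "firmware": "21.3.2", "window": "2024-07-01T02:00Z"}
requests.post(f"{CONTROLLER}/upgrades", json=upgrade, headers=HEADERS, timeout=10)
```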

See More: What Is Network Hardware? Definition, Architecture, Challenges, and Best Practices

Organizations are gradually adopting cloud-based services. SD-WAN enables users to access the cloud remotely without burdening the core network with additional traffic to manage and secure. This can reduce costs for organizations looking to cut down on office space, equipment, and rent, since employees can work remotely. The need for additional IT experts to manage and secure data traffic is also minimized.

SD-WAN solutions improve cloud application performance by prioritizing business-critical applications and letting them communicate directly with the internet. SD-WAN applies quality-of-service and data optimization before directing network traffic along the most efficient routes.

Even as cloud-based resources grow in popularity, organizations still wait weeks or months for new WAN circuits to be set up by managed service providers (MSPs). A fully managed, cloud-first WAN service could deliver network offerings comparable to other cloud services through orchestration and automation.

Such a service would allow new locations and services to be turned up quickly anywhere in the world, bolstering enterprise flexibility. It would also make troubleshooting easier and give enterprises greater visibility.

SD-WAN technologies offer predictive analytics that let IT specialists anticipate potential outages and mitigate other issues before they escalate. SD-WAN monitors the system in real time and provides data analytics to detect and predict problems. This shortens resolution time for IT troubleshooting, lowers TCO, and keeps performance at its peak, which increases productivity and decreases costs because IT experts are not always required on premises. When a problem does arise, they can quickly identify and fix it.
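As a simple illustration of that idea, the sketch below flags latency samples that deviate sharply from a rolling baseline, the kind of signal a controller could use to alert operators or reroute traffic before users notice. The window size and threshold are arbitrary assumptions.

```python
# Sketch of the kind of analytics an SD-WAN controller could run on link
# telemetry to flag a degrading circuit before it fails (illustrative thresholds).

from statistics import mean, stdev

def flag_anomalies(latency_samples: list[float], window: int = 20, sigmas: float = 3.0):
    """Yield sample indices whose latency deviates sharply from the recent baseline."""
    for i in range(window, len(latency_samples)):
        baseline = latency_samples[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd and abs(latency_samples[i] - mu) > sigmas * sd:
            yield i, latency_samples[i]

samples = [20 + (i % 3) for i in range(40)] + [95, 110, 120]  # sudden spike at the end
for idx, value in flag_anomalies(samples):
    print(f"sample {idx}: {value} ms looks anomalous -> alert operations / reroute traffic")
```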

See More: How to Get SD-WAN Security Right?

A software-defined wide area network is a crucial enabler of enterprise digital transformation. It is highly extensible, so it can integrate new-age security technologies like secure access service edge (SASE) with existing network infrastructure, and it simplifies IT operations by paving the way for AIOps alongside network management. That's why it is vital to understand how SD-WAN works and what benefits it offers before starting your adoption journey.

Did this article fully inform you about the role of SD-WAN in a modern enterprise? Tell us on Facebook, Twitter, and LinkedIn. We'd love to hear from you!

See more here:
Everything You Need to Know About SD-WAN - Spiceworks News and Insights
