
How Machine Learning Revolutionizes Automation Security with AI-Powered Defense – Automation.com

Summary

Machine learning is sometimes considered a subset of overarching AI. But in the context of digital security, it may be better understood as a driving force, the fuel powering the engine.

The terms AI and machine learning are often used interchangeably by professionals outside the technology, managed IT and cybersecurity trades. But, truth be told, they are separate and distinct tools that can be coupled to power digital defense systems and frustrate hackers.

Artificial intelligence has emerged as an almost ubiquitous part of modern life. We experience its presence in everyday household robots and the familiar Alexa voice that always seems to be listening. Practical uses of AI mimic human behavior and take it one step further. In cybersecurity, it can deliver 24/7 monitoring, eliminating the need for a weary flesh-and-blood guardian to stand a post.

Machine learning is sometimes considered a subset of overarching AI. But in the context of digital security, it may be better understood as a driving force, the fuel powering the engine. Using programmable algorithms, it recognizes sometimes subtle patterns. This proves useful when deployed to follow the way employees and other legitimate network users navigate systems. Although discussions of AI and machine learning can feel redundant to some degree, together they are a powerful one-two punch for automating security decisions.

Integrating AI calls for a comprehensive understanding of mathematics, logical reasoning, cognitive sciences and a working knowledge of business networks. The professionals who implement AI for security purposes must also possess high-level expertise and protection planning skills. Used as a problem-solving tool, AI can provide real-time alerts and take pre-programmed actions. But it cannot effectively stem the tide of bad actors without support. Enter machine learning.

In this context, machine learning emphasizes software solutions driven by data analysis. Unlike human information processing limitations, machine learning can handle massive swaths of data. What machine learning learns, for lack of a better word, translates into actionable security intel for the overarching AI umbrella.

Some people think about machine learning as a subcategory of AI, which it is. Others comprehend it in a functional way, i.e., as two sides of the same coin. But for cybersecurity experts determined to deter, detect and repel threat actors, machine learning is the gasoline that powers AI engines.

It's now essential to leverage machine learning capabilities to develop a so-called intelligent computer that can defend itself, to some degree. Although the relationship between AI and machine learning is diverse and complex, an expert can integrate them into a cybersecurity posture with relative ease. It's simply a matter of repetition and the following steps.

When properly orchestrated and refined to detect user patterns and subtle anomalies, the AI-machine learning relationship helps cybersecurity professionals keep valuable and sensitive digital assets away from prying eyes and greedy digital hands.

First and foremost, it's crucial to put AI and machine learning benefits in context. Studies consistently conclude that more than 80% of all cybersecurity failures are caused by human error. Using automated technologies removes many mistake-prone employees and other network users from the equation. Along with minimizing risk, the following are benefits of onboarding these automated, next-generation technologies.

Improved cybersecurity efficiency. According to the 2023 Global Security Operations Center Study, cybersecurity professionals spend one-third of their workday chasing down false positives. This wasted time hurts their ability to respond to legitimate threats, leaving a business at higher than necessary risk. The strategic application of AI and machine learning can recognize harmless anomalies and alert a CISO or vCISO only when authentic threats are present.

Increased threat hunting capabilities. Without proactive, automated security measures like MDR (managed detection and response), organizations too often follow an outdated break-and-fix model. Hackers breach systems or deposit malware, and then the IT department spends the rest of the day, or week, trying to purge the threat and repair the damage. Cybersecurity experts have widely adopted the philosophy that the best defense is a good offense. A thoughtful AI-machine learning strategy can engage in threat hunting without ever needing a coffee break.

Cure business network vulnerabilities. Vulnerability management approaches generally employ technologies that provide proactive automation. They close cybersecurity gaps and cure inherent vulnerabilities by identifying these weaknesses and alerting human decision-makers. Unlike a routine annual risk assessment, these cutting-edge technologies deliver ongoing analytics and constant vigilance.

Resolve the cybersecurity skills gap. It's something of an open secret that there are not enough trained, certified cybersecurity experts to fill corporate positions. That's one of the reasons why industry leaders tend to outsource managed IT and cybersecurity to third-party firms. Outsourcing helps to onboard the high-level knowledge and skills required to protect valuable digital assets and sensitive information. With too few cybersecurity experts to safeguard businesses, automation lets the resources companies do have drill down and identify true threats. Without these advanced technologies bolstering network security, the number of debilitating cyberattacks would likely grow exponentially.

The type of predictive analytics and swift decision-making this two-pronged approach delivers has seemingly endless industry applications. Banking and financial sector organizations can use AI and machine learning not only to repel hackers but also to ferret out fraud. Healthcare organizations have a unique opportunity to exceed Health Insurance Portability and Accountability Act (HIPAA) requirements because of the advanced personal identity record protections these technologies afford. Companies conducting business in the global marketplace can also get a leg up in meeting the EU's General Data Protection Regulation (GDPR), designed to further informational privacy.

Perhaps the greatest benefit organizations garner from AI and machine learning security automation is the ability to detect, respond to and expel threat actors and malicious applications. Managed IT cybersecurity experts can help companies close the skills gap by integrating these and other advanced security strategies.

John Funk is a Creative Consultant at SevenAtoms. A lifelong writer and storyteller, he has a passion for tech and cybersecurity. When he's not enjoying craft beer or playing Dungeons & Dragons, John can often be found spending time with his cats.

Check out our free e-newsletters to read more great articles.

Original post:
How Machine Learning Revolutionizes Automation Security with AI-Powered Defense - Automation.com

Read More..

Predicting Chaos With AI: The New Frontier in Autonomous Control – SciTechDaily

Advanced machine learning algorithms have shown potential in efficiently controlling complex systems, promising significant improvements in autonomous technology and digital infrastructure.

Recent research highlights the development of advanced machine learning algorithms capable of controlling complex systems efficiently. These new algorithms, tested on digital twins of chaotic electronic circuits, not only predict and control these systems effectively but also offer significant improvements in power consumption and computational demands.

According to a new research study, systems controlled by next-generation computing algorithms could give rise to better and more efficient machine learning products.

Researchers used machine learning techniques to construct a digital twin (a virtual replica) of an electronic circuit known for its chaotic behavior. They found that they were successful at predicting how it would behave and using that information to control it.

Many everyday devices, like thermostats and cruise control, utilize linear controllers which use simple rules to direct a system to a desired value. Thermostats, for example, employ such rules to determine how much to heat or cool a space based on the difference between the current and desired temperatures.

Yet because of how straightforward these algorithms are, they struggle to control systems that display complex behavior, like chaos.

As a result, advanced devices like self-driving cars and aircraft often rely on machine learning-based controllers, which use intricate networks to learn the optimal control algorithm needed to best operate. However, these algorithms have significant drawbacks, the most significant of which is that they can be extremely challenging and computationally expensive to implement.

"Now, having access to an efficient digital twin is likely to have a sweeping impact on how scientists develop future autonomous technologies," said Robert Kent, lead author of the study and a graduate student in physics at The Ohio State University.

"The problem with most machine learning-based controllers is that they use a lot of energy or power and they take a long time to evaluate," said Kent. "Developing traditional controllers for them has also been difficult because chaotic systems are extremely sensitive to small changes."

These issues, he said, are critical in situations where milliseconds can make a difference between life and death, such as when self-driving vehicles must decide to brake to prevent an accident.

The study was published recently in Nature Communications.

Compact enough to fit on an inexpensive computer chip capable of balancing on your fingertip and able to run without an internet connection, the team's digital twin was built to optimize a controller's efficiency and performance, which researchers found resulted in a reduction of power consumption. It achieves this quite easily, mainly because it was trained using a type of machine learning approach called reservoir computing.

"The great thing about the machine learning architecture we used is that it's very good at learning the behavior of systems that evolve in time," Kent said. "It's inspired by how connections spark in the human brain."
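
The paper's own implementation is not reproduced in the article, but the core idea of reservoir computing can be sketched in a few dozen lines. The following is a generic echo state network example, not the authors' edge-hardware code: it builds a fixed random reservoir, trains only a linear ridge-regression readout, and learns one-step-ahead prediction of a chaotic logistic-map signal. All sizes and constants are arbitrary choices made for illustration.

```python
import numpy as np

# Minimal echo state network (reservoir computer) sketch -- NOT the authors' code.
# It learns one-step-ahead prediction of a chaotic signal (the logistic map) to
# illustrate why reservoir computing suits systems that evolve in time.

rng = np.random.default_rng(0)

# Chaotic training signal: logistic map x_{t+1} = r * x_t * (1 - x_t)
r, n_steps = 3.9, 2000
x = np.empty(n_steps)
x[0] = 0.5
for t in range(n_steps - 1):
    x[t + 1] = r * x[t] * (1.0 - x[t])

# Fixed random reservoir: only the linear readout is trained.
n_res = 200
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

states = np.zeros((n_steps, n_res))
for t in range(1, n_steps):
    pre = W_in[:, 0] * x[t - 1] + W @ states[t - 1]
    states[t] = np.tanh(pre)

# Ridge-regression readout mapping reservoir state -> next value
washout = 100
S, y = states[washout:-1], x[washout + 1:]
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)

pred = S @ W_out
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

Because only the readout weights are trained (a single linear solve), training cost and power draw stay small, which is the efficiency property the article emphasizes.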

Although similarly sized computer chips have been used in devices like smart fridges, according to the study, this novel computing ability makes the new model especially well-equipped to handle dynamic systems such as self-driving vehicles as well as heart monitors, which must be able to quickly adapt to a patient's heartbeat.

"Big machine learning models have to consume lots of power to crunch data and come out with the right parameters, whereas our model and training is so extremely simple that you could have systems learning on the fly," he said.

To test this theory, researchers directed their model to complete complex control tasks and compared its results to those from previous control techniques. The study revealed that their approach achieved a higher accuracy at the tasks than its linear counterpart and is significantly less computationally complex than a previous machine learning-based controller.

"The increase in accuracy was pretty significant in some cases," said Kent. Though the outcome showed that their algorithm does require more energy than a linear controller to operate, this tradeoff means that when it is powered up, the team's model lasts longer and is considerably more efficient than current machine learning-based controllers on the market.

"People will find good use out of it just based on how efficient it is," Kent said. "You can implement it on pretty much any platform and it's very simple to understand." The algorithm was recently made available to scientists.

Outside of inspiring potential advances in engineering, there's also an equally important economic and environmental incentive for creating more power-friendly algorithms, said Kent.

As society becomes more dependent on computers and AI for nearly all aspects of daily life, demand for data centers is soaring, leading many experts to worry over digital systems' enormous power appetite and what future industries will need to do to keep up with it.

And because building these data centers as well as large-scale computing experiments can generate a large carbon footprint, scientists are looking for ways to curb carbon emissions from this technology.

To advance their results, future work will likely be steered toward training the model to explore other applications like quantum information processing, Kent said. In the meantime, he expects that these new elements will reach far into the scientific community.

"Not enough people know about these types of algorithms in the industry and engineering, and one of the big goals of this project is to get more people to learn about them," said Kent. "This work is a great first step toward reaching that potential."

Reference: "Controlling chaos using edge computing hardware" by Robert M. Kent, Wendson A. S. Barbosa and Daniel J. Gauthier, 8 May 2024, Nature Communications. DOI: 10.1038/s41467-024-48133-3

This study was supported by the U.S. Air Force's Office of Scientific Research. Other Ohio State co-authors include Wendson A.S. Barbosa and Daniel J. Gauthier.

Read this article:
Predicting Chaos With AI: The New Frontier in Autonomous Control - SciTechDaily

Read More..

What is Overfitting in Machine Learning? – TechTarget

What is overfitting in machine learning?

Overfitting in machine learning occurs when a model excessively fits the training data, capturing both relevant patterns and inconsequential noise, resulting in inaccurate predictions of new data. Simpler models are less susceptible to capturing noise or irrelevant patterns, which increases the likelihood of generalizing effectively to unseen data.

For example, imagine a company using machine learning to select a few candidates to interview from a large set of resumes based solely on the resume content. The model can consider relevant factors, such as education, experience and skills. However, an overfit model might also fixate on font choices, rejecting highly qualified applicants for using Helvetica rather than Times New Roman.

Most factors contributing to overfitting can be found in the model, data or training methods. If a machine learning model is too complex, it memorizes training data closely rather than learning the relevant underlying pattern.

If the training data contains too much noise or if the training data set is too small, the model does not have enough good data to distinguish between signal and noise. If a model trains for too long -- even with optimized data and model -- it starts learning noise, reducing performance the longer it trains. Another potential pitfall is repeatedly testing a model on the same validation data, leading to implicit overfitting to a stale data set.

Underfitting is the opposite of overfitting in that the machine learning model doesn't fit the training data closely enough, thus failing to learn the pattern in the data. Underfitting can be caused by using a too-simple model for a complex problem.

In the above example where a company evaluates resumes with machine learning, an underfit model is too simplistic and fails to capture the relationship between resume contents and job requirements. For example, the underfit model may select all resumes containing specific keywords, such as Java and JavaScript, even if only JavaScript skills are required for the position. The model focuses too heavily on the single keyword Java, even though Java and JavaScript require completely different skills. It then fails to detect suitable candidates in both the training data and new data.

One sign of an overfit model is when it performs well on the training data but poorly on new data. However, there are other methods to test the model's performance more effectively.

K-fold cross-validation is an essential tool for assessing the performance of a model. The training data is randomly split into K subsets of equal size, referred to as folds. In each iteration, one fold is held out as the validation fold and the model is trained on the remaining K-1 folds; the model is then evaluated on the held-out fold and its performance metrics are calculated. This process is repeated K times, using a different fold as the validation fold each time. The performance metrics are then averaged to give a single overall performance measure for the model.
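
A minimal sketch of K-fold cross-validation with scikit-learn follows; the data set and model are placeholders. The point is only that each of the five folds serves once as the validation fold and the scores are then averaged.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data set and model for illustration only.
X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

# 5 folds: each fold serves once as the held-out validation set.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("per-fold accuracy:", scores.round(3))
print("mean accuracy:", scores.mean().round(3))
```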

Technically, two learning curves are generated for one analysis. One learning curve is generated on the training data set to evaluate how the model is learning, and the other curve is generated on the validation set, which evaluates how well the model is generalizing to new data. Then, the learning curves plot performance metrics, such as error or accuracy, against the number of training data points.

As the training set size increases, patterns in the performance metrics begin to emerge. When the training error and the validation error plateau, adding more data no longer meaningfully changes the fit. For an underfitting model, the two curves converge but remain at a high error value. For an overfitting model, the error values are lower, but a persistent gap separates the validation curve from the training curve, indicating that the model underperforms on the validation data.
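
A small sketch of how the two learning curves described above can be generated with scikit-learn's learning_curve helper; the data set and model are placeholders, and the scores would normally be plotted rather than printed.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Training vs. validation accuracy as the training set grows.
# A persistent gap between the two curves suggests overfitting.
X, y = load_breast_cancer(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=5000), X, y,
    cv=5, train_sizes=np.linspace(0.1, 1.0, 5), scoring="accuracy",
)
for n, tr, va in zip(sizes, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"{n:4d} samples  train={tr:.3f}  validation={va:.3f}")
```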

Organizations must improve the model and data to prevent overfitting in machine learning.

Here are some ways to refine and optimize models to decrease the risk of overfitting in machine learning.

Understanding the problem and selecting the appropriate machine learning algorithm are crucial at the beginning of a project. While cost evaluation and performance optimization are important, beginners should start with the simplest algorithm to avoid complications and improve generalization. Simple algorithms, such as k-means clustering or k-nearest neighbors, offer more straightforward interpretation and debugging.

In machine learning, features are the individual measurable properties or characteristics of the data used as inputs for training a model. Feature selection identifies which features are the most useful for the model to learn, which reduces the model's dimensionality.
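
As an illustration, a simple univariate filter such as scikit-learn's SelectKBest can perform basic feature selection; the data set and the choice of k below are placeholders.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif

# Keep the 10 features most strongly associated with the target,
# reducing the model's dimensionality before training.
data = load_breast_cancer()
selector = SelectKBest(score_func=f_classif, k=10)
X_reduced = selector.fit_transform(data.data, data.target)

kept = [name for name, keep in zip(data.feature_names, selector.get_support()) if keep]
print("kept features:", kept)
print("shape before/after:", data.data.shape, X_reduced.shape)
```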

The risk of overfitting increases with the complexity of the model. Regularization places constraints on the model during training to keep that complexity in check.

During the training process, the weights of the machine learning model -- or coefficients -- are adjusted to minimize the loss function, which represents the difference between the predicted outputs of a model and the actual target values. Training can therefore be written as a minimization over the weights w:

min_w L(w)

Regularization adds a penalty term λ‖w‖ to the loss function and then solves for the set of weights that minimizes the combined objective:

min_w L(w) + λ‖w‖

There are different ways to do this, depending on the model type.

Ridge regression is a linear regression technique that adds the sum of the squares of the weights to the loss function during training, aiming to prevent overfitting by keeping the coefficients as small as possible without reducing them to zero.

Least absolute shrinkage and selection operator (LASSO) regression adds the sum of the absolute values of the model's weights to the loss function. This automatically performs feature selection by eliminating the weights of the least important features.

Elastic net regression adds a regularization term that combines the ridge and LASSO penalties, introducing a mixing hyperparameter that controls the balance between the two: at one extreme the penalty is pure ridge, at the other pure LASSO, and its value determines how much automatic feature selection is done on the model.
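
The three penalties can be compared side by side. The sketch below uses scikit-learn's Ridge, Lasso and ElasticNet estimators with arbitrary example penalty strengths and prints how many coefficients each one drives to zero; the data set is a placeholder.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# alpha controls overall penalty strength; l1_ratio (elastic net only) mixes
# the LASSO and ridge penalties. Values here are arbitrary examples.
X, y = load_diabetes(return_X_y=True)

for name, model in [
    ("ridge", Ridge(alpha=1.0)),
    ("lasso", Lasso(alpha=0.5)),
    ("elastic net", ElasticNet(alpha=0.5, l1_ratio=0.5)),
]:
    pipe = make_pipeline(StandardScaler(), model).fit(X, y)
    coefs = pipe[-1].coef_
    zeroed = int((coefs == 0).sum())
    print(f"{name:11s}  zeroed coefficients: {zeroed}/{len(coefs)}")
```

LASSO and elastic net typically zero out some coefficients (automatic feature selection), while ridge shrinks them without eliminating any.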

This method works for iterative learning algorithms, such as gradient descent. As training proceeds, the prediction error on both the training and validation sets initially goes down. If training continues for too long, the model begins to overfit, and the error rate on the validation set starts to rise again. Early stopping is a form of regularization that halts training once the validation error reaches its minimum or when a plateau is detected.
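
One common way to apply early stopping is to let the estimator hold out an internal validation split and stop once its score stops improving. The sketch below uses scikit-learn's gradient boosting classifier with the n_iter_no_change option; the data set and thresholds are example choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# A fraction of the training data is held out as a validation set, and
# boosting halts once the validation score stops improving.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(
    n_estimators=1000,          # upper bound on boosting rounds
    validation_fraction=0.2,    # internal validation split
    n_iter_no_change=10,        # stop after 10 rounds without improvement
    random_state=0,
)
model.fit(X_train, y_train)
print("rounds actually trained:", model.n_estimators_)
print("test accuracy:", round(model.score(X_test, y_test), 3))
```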

Dropout is a regularization technique used in deep neural networks. Each neuron has a probability -- known as the dropout rate -- that it is ignored or "dropped out" at each data point in the training process. During training, each neuron is forced to adapt to the occasional absence of its neighbors and rely more on its inputs. This leads to a stronger, more resilient network with reduced susceptibility to minor input variations, which minimizes the risk of the network mistaking noise for meaningful data. Adjusting the dropout rate can address overfitting by increasing it or underfitting by decreasing it.
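
As a small illustration, the sketch below places a dropout layer in a toy feed-forward network using PyTorch; the layer sizes and the 0.5 dropout rate are arbitrary example values, and dropout is only active in training mode.

```python
import torch
from torch import nn

# Dropout layer placement in a small feed-forward network. Tune the rate up
# to fight overfitting or down to fight underfitting.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # each hidden unit is dropped with probability 0.5 during training
    nn.Linear(64, 2),
)

x = torch.randn(8, 20)
model.train()            # dropout active: random units are zeroed each forward pass
print(model(x).shape)
model.eval()             # dropout disabled at inference time
print(model(x).shape)
```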

Ensemble methods aggregate predictions from multiple models toward the end of a machine learning project, which reduces both bias and variance, leading to more comprehensive predictions. An example of an ensemble method is random forests, which build multiple decision trees during training. Each tree is trained on a random subset of the data and features. During prediction, the random forest aggregates the predictions of individual trees to produce a final prediction, often achieving high accuracy and robustness against overfitting.
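
A minimal random forest example with scikit-learn follows; the data set and hyperparameters are placeholders, and the cross-validated score simply illustrates the ensemble's robustness.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Many trees, each trained on a random subset of rows and features,
# vote on the final prediction.
X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)
scores = cross_val_score(forest, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```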

The data is just as vital as the model, so organizations can do the following to improve data.

A large training data set provides a more comprehensive representation of the underlying problem, enabling the model to learn the true patterns and dependencies rather than memorizing specific instances.

Data augmentation helps reduce overfitting by copying a training instance and altering it slightly, so that the model sees it as a new example even though a human would barely notice the change. The model has more opportunities to learn the desired pattern while increasing its tolerance for variation. Data augmentation is especially helpful for balancing a data set because it can add more examples of underrepresented classes, helping to improve the model's ability to generalize across diverse scenarios and avoid biases in the training data.
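
For tabular data, one simple augmentation scheme is to copy underrepresented instances and add a small amount of jitter. The sketch below is a generic illustration with synthetic placeholder features, not a recommendation for any particular data set.

```python
import numpy as np

# Copy each instance of an underrepresented class and add small Gaussian
# jitter, producing new examples a human reviewer would barely notice.
rng = np.random.default_rng(0)

X_minority = rng.normal(size=(50, 8))          # placeholder minority-class features
noise_scale = 0.05 * X_minority.std(axis=0)    # jitter is a few percent of each feature's spread

X_augmented = X_minority + rng.normal(scale=noise_scale, size=X_minority.shape)
X_balanced = np.vstack([X_minority, X_augmented])
print("before:", X_minority.shape, "after:", X_balanced.shape)
```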

Original post:
What is Overfitting in Machine Learning? - TechTarget

Read More..

Patient classification and attribute assessment based on machine learning techniques in the qualification process for … – Nature.com

Aim

An adrenal incidentaloma (AI) is an asymptomatic adrenal mass that is recognized incidentally during imaging examinations and is not associated with suspected adrenal pathology1,2. Incidental discovery of adrenal masses has increased recently due to wider application and technical improvement of abdominal imaging procedures, with a prevalence of approximately 0.2–6.9% in radiological studies1,3,4,5. A comprehensive hormonal evaluation of newly diagnosed adrenal masses at their initial presentation was recommended by the European Society of Endocrinology in 20166.

Patients should be referred for adrenalectomy with clinically significant hormone excess, radiological findings suspicious for malignancy, signs of local invasion, and when the tumour is greater than 5 cm6. Underlying comorbidities, advanced age, and Hispanic ethnicity were associated with more frequent postoperative complications. Therefore, the coexistence of heart failure or respiratory failure should always be considered before qualifying for surgical treatment of adrenal tumours7.

The primary objective of this study was to compare several machine learning (ML) techniques in a qualification for adrenalectomy and choose the most accurate algorithm as a valuable adjunct tool for doctors to simplify making therapeutic decisions by using the most innovative and modern methods. To the best of our knowledge, this study is the first attempt to apply ML techniques to qualify for the surgical treatment of AI using both the results of diagnostic tests and computed tomography (CT) image features. Preliminary results of this study were presented in a poster session at the European Congress of Endocrinology8.

In the literature, most studies apply computer vision techniques to recognize the type of tumour based on CT images9,10,11,12,13,14,15,16. In one study, the authors evaluated ML-based texture analysis of unenhanced CT images in differentiating pheochromocytoma from lipid-poor adenoma in adrenal incidentaloma10. The textural features were computed using the MaZda software package, and two classification methods were used: multivariable logistic regression (accuracy of 94%) and number of positive features by comparison to cut-off values (accuracy of 85%). The results were encouraging; however, decision classes were unbalanced and the accuracy values were computed on the test set. Therefore, they were biased estimators. In another study, the authors applied a multivariable logistic regression model with 11 selected textural features computed using MaZda software11. The cut-off point obtained using the receiver operating characteristic (ROC) curve applied to the expression obtained from logistic regression resulted in a sensitivity of 93% and 100% specificity. Again, these results were obtained using the same set used to train the model. In another study performed by Li et al., ML models were used to differentiate pheochromocytoma from lipid-poor adenoma based on the radiologists' description of unenhanced and enhanced CT images9. The authors used three classifiers: multivariate logistic regression, SVM and random forest. As a result, two separate models based on multivariable logistic regression were proposed, each using three CT features: M1 with preenhanced CT value, shape, and necrosis/cystic changes (accuracy of 86%) and M2 using only preenhanced CT features: CT value, shape, and homogeneity (accuracy of 83%). Elmohr et al. used the ML algorithm to differentiate large adrenal adenomas from carcinomas on contrast-enhanced computed tomography, and its diagnostic accuracy for carcinomas was higher than that of radiologists13. Other studies have evaluated the accuracy of ML-based texture analysis of unenhanced CT images in differentiating lipid-poor adenoma from pheochromocytoma, with performance accuracy ranging from 85 to 89%10,14.

The literature also includes papers applying ML techniques to magnetic resonance imaging (MRI) data. An example of such work is a study where the authors utilized logistic regression with the least absolute shrinkage and selection operator (LASSO) to select MRI image features and distinguish between non-functional AI and adrenal Cushing's syndrome17.

In studies involving a large number of features (e.g., software packages such as MaZda can calculate several hundred texture parameters per image), dimensionality reduction is required. Techniques commonly used, alone or in combination, are: LASSO with regression18,19,20,21; elimination of correlated features9,21 or those with low intraclass correlation (ICC)18; training of classifiers on subsets of features and selection of the subsets with the highest classifier accuracy9; elimination of features with p-values above the accepted error rate for coefficients in regression models; and use of the feature discrimination power calculated from the ROC curve for each feature separately10.
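
As a rough illustration of two of those steps, the sketch below first drops one feature from every highly correlated pair and then lets an L1-penalized (LASSO-style) logistic regression zero out the remainder. The texture-feature values are random placeholders, not radiomics data, and the correlation threshold and penalty strength are arbitrary.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder "texture features" standing in for software-derived parameters.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(120, 50)),
                 columns=[f"texture_{i}" for i in range(50)])
y = rng.integers(0, 2, size=120)

# Step 1: correlation filtering -- keep the first of each pair with |r| > 0.9.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]
X_filtered = X.drop(columns=to_drop)

# Step 2: L1-penalized logistic regression as embedded feature selection.
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
model.fit(X_filtered, y)
n_kept = int((model[-1].coef_ != 0).sum())
print(f"features: {X.shape[1]} -> {X_filtered.shape[1]} after correlation filter, "
      f"{n_kept} with nonzero LASSO coefficients")
```

In practice the thresholds and penalty strength would be tuned and validated on held-out data rather than fixed in advance.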

Artificial neural networks (ANN) are flexible and powerful ML techniques that have evolved from the idea of simulating the human brain; however, their successful application usually requires datasets much larger than those needed by other classification methods17,18,19.

To improve the quality of patient care, recent studies have been conducted in several different sectors using modern techniques. There are two types of ML-based models: current-condition identification and forward prediction20. In Table 1, we have summarized studies concerning the utilization of ML techniques in AI management.

Read more from the original source:
Patient classification and attribute assessment based on machine learning techniques in the qualification process for ... - Nature.com

Read More..

Grid Dynamics Achieves the Amazon Web Services Machine Learning Competency – Chronicle-Tribune


Go here to read the rest:
Grid Dynamics Achieves the Amazon Web Services Machine Learning Competency - Chronicle-Tribune

Read More..

Machine Learning Revolutionizes Cybersecurity; Detecting, Preventing Threats – TechiExpert.com

Cybersecurity is highly critical, and threats are expected to continue evolving and growing. Organizations are turning to advanced technologies like artificial intelligence (AI) and machine learning (ML) to combat these threats. These technologies are revolutionizing how we detect and prevent cyber attacks, offering innovative solutions that can enhance our cybersecurity defenses.

AI and ML are powerful tools in the fight against cyber threats because they can analyze vast amounts of data quickly and accurately. The two technologies can detect patterns and anomalies that might indicate a cyber attack. Behavioral analysis is one way to serve this purpose: the tools learn the normal behavior patterns of users and devices within a network, the so-called User and Entity Behavior Analytics (UEBA).

Another way is through network traffic analysis. AI and ML monitor network traffic for unusual patterns, like unexpected data transfers or communication. This method helps identify potential threats before significant damage takes place. Moreover, deception technology can trick attackers into revealing themselves.
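
The article describes these capabilities at a product level rather than in code, but the underlying idea can be sketched with a generic unsupervised anomaly detector. The example below uses scikit-learn's IsolationForest on synthetic flow features (bytes transferred, duration and port entropy are assumed placeholders, not any vendor's schema).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "network flow" features: [bytes transferred, duration, port entropy].
rng = np.random.default_rng(0)
normal_flows = rng.normal(loc=[500, 2.0, 1.0], scale=[100, 0.5, 0.2], size=(1000, 3))
odd_flows = rng.normal(loc=[5000, 30.0, 4.0], scale=[500, 5.0, 0.5], size=(5, 3))

# Fit on observed traffic, then flag flows that deviate from the learned baseline.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)
labels = detector.predict(np.vstack([normal_flows[:5], odd_flows]))  # -1 = anomaly
print(labels)
```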

AI and ML also automate defensive responses to detected threats. ML algorithms trained on large datasets of malware can identify and neutralize malware variants with high accuracy. AI systems can analyze emails and websites to detect and block phishing attempts. Modern intrusion detection systems (IDS) use AI to analyze network traffic.

Security Orchestration, Automation and Response (SOAR) platforms integrate AI and ML to automate incident response workflows. They can automatically isolate infected systems, block malicious traffic and initiate other countermeasures. This quickens response times and reduces the burden on human analysts.

Automated patch management analyzes vulnerability data and prioritizes patching efforts based on risk levels. AI ensures that critical vulnerabilities are addressed promptly while reducing the window of opportunity for attackers.

Read the original post:
Machine Learning Revolutionizes Cybersecurity; Detecting, Preventing Threats - TechiExpert.com

Read More..

USF Computer Science & Engineering graduate student takes top prize in DoD’s national cyber competition – University of South Florida

The Department of Defense challenged the country's top cybersecurity minds to compete in a skills competition, and USF Computer Science and Engineering student and CyberHerd team captain Waseem Albaba came out on top during the spring 2024 event.

The DoD's Cyber Sentinel Skills Challenge invites any U.S. citizen 18 and older to apply for entry. Around 3,000 competitors, including seasoned industry veterans and university students, logged on for the 8-hour capture-the-flag (CTF) contest.

Taking a top prize with a cash reward between $500 and $5,000 puts winners in an elite class of cybersecurity pros.

For Albaba, it's proof that educational opportunities paired with a dedication to the craft are a recipe for success.

"I am honored to have won the DOD Cyber Sentinel CTF and am deeply grateful for the support of my coaches and my school, Albaba said, adding that a little luck never hurts. The competition's focus on topics in which I tend to be good at played a significant role in my success.

During the competition, participants are given real-world challenges that require in-demand skillsets. The DoD said the tests measure understanding of technical, analytical and strategic abilities in forensics, web security, reverse engineering, reconnaissance and malware analysis.

The DoD says the competition is designed to identify individuals interested in pursuing a cybersecurity career with the DoD, but it also highlights the agency's focus on ongoing improvement of human cybersecurity skills.

While pursuing a master's degree in computer science from the USF College of Engineering, Albaba is an offensive security engineer at Trace3. His plans do not currently include employment at the DoD, but he sees the Department's competitions as an ongoing opportunity to grow.

"I eagerly anticipate participating in more competitions like this in the future," he said.

Another part of Albaba's ongoing growth in cybersecurity skills is his leadership role on the CyberHerd competition team. Team coach and faculty member Marbin Pazos Revilla pointed out the ever-changing landscape of cyber defense.

"The cybersecurity landscape has evolved very rapidly over the past several years, and the challenges we face as a nation and globally demand critical thinking and a broad set of cybersecurity skills, Pazos Revilla said. The DoD Cyber Sentinel Challenge demanded these skills as a clear representation of real-world scenarios, and I extremely proud that our USF CyberHerd Team Captain Waseem Albaba took home the First Place Award. This is also a very encouraging sign reflecting on the strategic decisions and investments our institution has placed in cybersecurity education."

Team faculty advisor and faculty member Sriram Chellapan added that the formation of the CyberHerd team was a response to the growing need for graduates to be seen as skilled cybersecurity professionals.

"The CyberHerd team was formed in August 2023, specifically geared to improve the competitive mindset of our students to thrive in a high-risk, high-reward field impacting national and global security. This accomplishment from Waseem demonstrates our strong commitment toward student success," Chellapan said.

View post:

USF Computer Science & Engineering graduate student takes top prize in DoD's national cyber competition - University of South Florida

Read More..

UCF Hosts ‘Battle of the Brains’ Programming Championship, Seeking 13th Straight Trip to World Finals – UCF

UCF's Apocalypse Attack students will be among 50 teams from the U.S. and Canada competing May 27 in the nation's most prestigious computer programming contest, showcasing their exceptional talents to leading employers.

The UCF College of Engineering and Computer Science will host the International Collegiate Programming Contest (ICPC) North America Championship. Known as the Battle of the Brains, the contest involves a five-hour race to solve the most brain teasers and logic problems correctly. Each problem is solved by writing a computer program that generates the correct answer.

UCF will compete alongside the most elite teams on the continent, including Massachusetts Institute of Technology, Harvard University, Columbia University and Stanford University.

The top 16 teams will move on to the world finals, to be held in September in Astana, Kazakhstan.

The world's most prestigious collegiate competition builds students' problem-solving skills, which are valuable to those seeking a career in software development, data science and research, and related fields.

UCF's team will be seeking to advance to the world finals for the 13th consecutive year. Team advisor and Professor of Computer Science Ali Orooji says the team is excited to compete at home, and that he is optimistic about UCF's chances of placing exceptionally well at the championship.

"We have a strong team, and they definitely have the talent to finish in the top five," Orooji says. "It is, however, a five-hour contest with many good teams, so the smallest mistakes will make a difference. We are very proud of UCF's consistent record over the last 40 years and appreciate the dedication and hard work of our students."

The NAC and NAPC are sponsored by the National Security Agency, Jane Street, Citadel, Jump Trading and JetBrains. Each sponsor will have representatives on-site at a career fair for the contest participants, giving the students the opportunity to meet with them to discuss future jobs and internships.

"It is our pleasure to showcase UCF to so many great institutions in North America," says Michael Georgiopoulos, dean of the College of Engineering and Computer Science. "It is also our pleasure to give the opportunity to students and coaches to interact with each other, get to know each other better and exchange ideas of how they can expand the passion and knowledge about computing to a wider audience."

This year's NAC is unique in that it also offers a programming camp, held three days before the championship as practice time for the attending teams. The North America Programming Camp (NAPC) features six world-class trainers who have coached world finalists or competed on the world stage themselves.

In addition to showcasing their technical talents, teams must display a slew of other strengths to make it through to the next level of competition, including smart time management, grace under pressure and successful collaboration in their teams of three, sharing one computer.

"All are champions in their own right, having first been selected to compete from their universities. They bested over 1,000 teams in ICPC competitions throughout North America to advance to the NAC, competing to advance to the ICPC World Finals, the top 1% of nearly 20,000 teams competing globally," says ICPC Executive Director Bill Poucher. "How good are they? They are extraordinary."

The hometown team, UCF Apocalypse Attack, coached by computer science lecturer Travis Meade, comprises Tyler Marks '24, who earned bachelor's degrees from UCF in both computer science and mechanical engineering; computer science alum Andy Phan '21 '23MS, who is working on a second master's degree from UCF in mathematics; and computer science undergraduate Sachin Sivakumar. The team placed first in the Southeast Regional Programming Contest to earn its spot in the North America Championship.

Damla Turgut, chair of the UCF Department of Computer Science, says that the competition shines a spotlight on the impressive technical skills students will demonstrate as they compete for a spot in world finals.

"We are excited to host the top student programmers from across North America at the UCF campus and to cheer on our own team in the competition," Turgut says. "This event will showcase the exceptional talents and skills of our graduates to leading tech employers."

See original here:

UCF Hosts 'Battle of the Brains' Programming Championship, Seeking 13th Straight Trip to World Finals - UCF

Read More..

VCU College of Engineering to offer new B.A. degree in computer science in fall 2024 – VCU News

By Becca Antler

The Virginia Commonwealth University College of Engineering is introducing a Bachelor of Arts degree in computer science, available to students in the upcoming 2024-25 school year. Distinct from the college's Bachelor of Science degree, the new program fills a crucial gap by allowing students to combine computer science education with other specialized fields of study.

The proposal for the Bachelor of Arts degree cited recent statistics showing that the majority of information technology jobs are now concentrated in nontech industries, highlighting the importance of a pathway for students from all backgrounds to gain proficiency in computer science.

Through a blend of theoretical knowledge and hands-on experience, the B.A. program will give students expertise in client computing needs assessment, computing system design, coding, testing and system documentation generation. Graduates will be prepared to work as entry-level computer programmers, computer support specialists, computer systems analysts, software developers, quality assurance analysts, software testers, web developers and digital designers.

VCU's current offering, an ABET-accredited Bachelor of Science degree, provides a concentrated curriculum tailored to students seeking advanced study in highly specialized areas of computer science. Alternatively, the new B.A. intends to provide an interdisciplinary pathway to computer science that promotes collaboration between diverse industries, meeting the growing demand for IT experience.

As the technological landscape shifts, VCU is committed to providing students with the skills and knowledge necessary to excel in an ever-evolving industry.

"We created this degree program to enable students in any major to attain a degree in computer science," said Caroline Budwell, Ph.D., associate professor and undergraduate director in the Department of Computer Science. "We recognize the value of a multidisciplinary program of study and how important computing knowledge is in every industry."

Subscribe to VCU News at newsletter.vcu.edu and receive a selection of stories, videos, photos, news clips and event listings in your inbox.

Go here to read the rest:

VCU College of Engineering to offer new B.A. degree in computer science in fall 2024 - VCU News

Read More..

Mizzou computer science students design fitness web application for senior capstone project – University of Missouri College of Engineering

May 21, 2024

One group of computer science students set out to develop a new fitness goal tracking web application.

"When brainstorming for this project, we discovered in our group that we all had different forms of working out," Allison Drainer said. "I do rock climbing, another member does strength training, two do cardio and another member bikes. We wanted to make sure our workout application could accommodate the different muscle groups needed for these exercises."

The goal for the application was to eliminate the need for a personal trainer or gym access, which can be cost prohibitive.

"The application uses an artificial intelligence (AI) algorithm to dynamically recommend workouts based on your preferences and help you stay fit based on your lifestyle," Gavin Boley said.

To create their web application, the team drew on knowledge from multiple classes needed for a computer science degree.

"Mizzou set us up well with classes to learn the required skills and information," Forrest Pritt said. "A lot of us took web development and design. We took algorithm classes to figure out how to make the back end work with the front end. We took a database class to make sure that all the user data and exercise could be stored."

"Mizzou set us up to take this project and apply all the principles we learned to a career," he said.

The final application, while not publicly available, has the capabilities of a full website, including an account creation page, login page and a homepage that takes users to the various tools available on the site.

"Users can view their progress tracking to see how the different workouts are helping them meet their goals," Bina Gallagher said. "Users can create their own goals, enter workouts they've done on their own time and enter profile information changes."

The application also achieves the group's goal of reducing the need for users to hire personal trainers.

"Users can be given customized workouts based on the information they provide on the website and its backend machine learning algorithms," Gallagher said.

Learn more about computer science at Mizzou!

Read about other capstone projects here.

Read more:

Mizzou computer science students design fitness web application for senior capstone project - University of Missouri College of Engineering

Read More..