
Top Deep Learning Interview Questions and Answers for 2024 – Simplilearn

The demand for Deep Learning has grown over the years, and its applications are now used in every business sector. Companies are on the lookout for skilled professionals who can use deep learning and machine learning techniques to build models that can mimic human behavior. According to Indeed, the average salary for a deep learning engineer in the United States is $133,580 per annum. In this tutorial, you will learn the top 45 Deep Learning interview questions that are frequently asked.

Check out some of the frequently asked deep learning interview questions below:

If you are going for a deep learning interview, you definitely know what exactly deep learning is. However, with this question the interviewer expects you to give a detailed answer, with an example. Deep Learning involves taking large volumes of structured or unstructured data and using complex algorithms to train neural networks. It performs complex operations to extract hidden patterns and features (for instance, distinguishing the image of a cat from that of a dog).

Neural Networks replicate the way humans learn, inspired by how the neurons in our brains fire, only much simpler.

The most common Neural Networks consist of three network layers:

Each layer contains neurons called nodes, performing various operations. Neural Networks are used in deep learning algorithms like CNN, RNN, GAN, etc.

As in Neural Networks, MLPs have an input layer, a hidden layer, and an output layer. An MLP has the same structure as a single layer perceptron, but with one or more hidden layers. A single layer perceptron can classify only linearly separable classes with binary output (0, 1), but an MLP can classify nonlinear classes.

Except for the input layer, each node in the other layers uses a nonlinear activation function: the node takes the incoming data, computes a weighted sum over all its inputs, adds a bias, and passes the result through the activation function to produce the output. MLP uses a supervised learning method called backpropagation. In backpropagation, the neural network calculates the error with the help of a cost function and propagates this error backward from where it came (adjusting the weights to train the model more accurately).

The process of standardizing and reforming data is called Data Normalization. It's a pre-processing step to eliminate data redundancy. Often, data comes in, and you get the same information in different formats. In these cases, you should rescale values to fit into a particular range, achieving better convergence.
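As a quick illustration of such rescaling, here is a minimal min-max scaling sketch (assuming Python with NumPy; the feature values are invented for illustration):

import numpy as np

x = np.array([15.0, 48.0, 91.0, 22.0])          # hypothetical raw feature values
x_scaled = (x - x.min()) / (x.max() - x.min())  # rescale into the [0, 1] range
print(x_scaled)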

One of the most basic Deep Learning models is a Boltzmann Machine, resembling a simplified version of the Multi-Layer Perceptron. This model features a visible input layer and a hidden layer -- just a two-layer neural net that makes stochastic decisions as to whether a neuron should be on or off. Nodes are connected across layers, but no two nodes of the same layer are connected.

At the most basic level, an activation function decides whether a neuron should be fired or not. It takes the weighted sum of the inputs plus a bias as its input. Step function, Sigmoid, ReLU, Tanh, and Softmax are examples of activation functions.
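For illustration, here is a rough sketch of a single neuron in plain Python, applying a sigmoid activation to the weighted sum of its inputs plus a bias (the input values and weights are invented):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

inputs = [0.5, -1.2, 3.0]   # hypothetical input values
weights = [0.4, 0.7, -0.2]  # hypothetical weights
bias = 0.1

z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum plus bias
print(sigmoid(z))  # the neuron's activation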

Also referred to as loss or error, the cost function is a measure of how good your model's performance is. It's used to compute the error of the output layer during backpropagation. We push that error backward through the neural network and use it during the different training functions.

Gradient Descent is an optimization algorithm used to minimize the cost function or error. The aim is to find the local (ideally global) minimum of a function. The gradient determines the direction the model should take to reduce the error.
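As a minimal sketch of the idea in plain Python, consider minimizing the toy cost f(x) = (x - 3)^2, whose gradient is 2(x - 3); the learning rate and starting point are arbitrary choices:

x = 0.0             # arbitrary starting point
learning_rate = 0.1

for _ in range(100):
    gradient = 2 * (x - 3)         # derivative of (x - 3)**2
    x -= learning_rate * gradient  # step against the gradient

print(x)  # converges towards the minimum at x = 3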

This is one of the most frequently asked deep learning interview questions. Backpropagation is a technique to improve the performance of the network. It backpropagates the error and updates the weights to reduce the error.

In this deep learning interview question, the interviewer expects you to give a detailed answer.

In a Feedforward Neural Network, signals travel in one direction, from input to output. There are no feedback loops; the network considers only the current input. It cannot memorize previous inputs (e.g., a CNN).

In a Recurrent Neural Network, signals travel in both directions, creating a looped network. It considers the current input together with previously received inputs when generating the output of a layer, and it can memorize past data thanks to its internal memory.

The RNN can be used for sentiment analysis, text mining, and image captioning. Recurrent Neural Networks can also address time series problems such as predicting the prices of stocks in a month or quarter.

Softmax is an activation function that generates outputs between zero and one. It normalizes each output such that the total sum of the outputs is equal to one. Softmax is often used for output layers.

ReLU (or Rectified Linear Unit) is the most widely used activation function. It gives an output of X if X is positive and zero otherwise. ReLU is often used for hidden layers.
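Both functions are short enough to sketch directly in NumPy (the input vector is invented for illustration):

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()         # outputs lie in (0, 1) and sum to one

def relu(x):
    return np.maximum(0, x)    # x if positive, zero otherwise

z = np.array([2.0, 1.0, -1.0])
print(softmax(z), relu(z))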

This is another frequently asked deep learning interview question. With neural networks, you're usually working with hyperparameters once the data is formatted correctly. A hyperparameter is a parameter whose value is set before the learning process begins. It determines how a network is trained and the structure of the network (such as the number of hidden units, the learning rate, the number of epochs, etc.).

When your learning rate is too low, training of the model will progress very slowly, as we are making only minimal updates to the weights. It will take many updates before reaching the minimum point.

If the learning rate is set too high, this causes undesirable divergent behavior in the loss function due to drastic weight updates. The model may fail to converge (never settle on a good output) or even diverge (the updates are too chaotic for the network to train).

Dropout is a technique of randomly dropping out hidden and visible units of a network to prevent overfitting (typically dropping 20 percent of the nodes). It doubles the number of iterations needed for the network to converge.
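A rough sketch of the mechanism at training time (assuming NumPy; the 20 percent rate matches the typical figure mentioned above, and the "inverted dropout" scaling keeps the expected activation unchanged):

import numpy as np

rate = 0.2                                    # fraction of units to drop
activations = np.array([0.5, 1.2, 0.3, 0.9])  # hypothetical layer outputs

mask = (np.random.rand(*activations.shape) >= rate)  # keep ~80% of the units
dropped = activations * mask / (1 - rate)            # inverted dropout scaling
print(dropped)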

Batch normalization is a technique to improve the performance and stability of neural networks by normalizing the inputs to every layer so that they have a mean activation of zero and a standard deviation of one.
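In essence, the computation looks like this (a NumPy sketch that omits the learnable scale and shift parameters a full implementation would include; the mini-batch values are invented):

import numpy as np

batch = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # hypothetical mini-batch

mean = batch.mean(axis=0)                   # per-feature mean over the batch
std = batch.std(axis=0)                     # per-feature standard deviation
normalized = (batch - mean) / (std + 1e-8)  # zero mean, unit standard deviation
print(normalized)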

The next step on this top Deep Learning interview questions and answers blog will be to discuss intermediate questions.

Batch Gradient Descent: computes the gradient using the entire dataset. It takes time to converge because the volume of data is huge and the weights update slowly.

Stochastic Gradient Descent: computes the gradient using a single sample. It converges much faster than batch gradient descent because it updates the weights more frequently.
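The difference is easy to see on a toy one-parameter regression (a plain NumPy sketch; the data and learning rate are invented for illustration):

import numpy as np

# Toy linear data: y = 2x, so the optimal weight is 2.
X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * X
lr = 0.01

# Batch gradient descent: one update per pass, using the whole dataset.
w = 0.0
for _ in range(100):
    grad = np.mean(2 * (w * X - y) * X)  # gradient of the mean squared error
    w -= lr * grad

# Stochastic gradient descent: one update per individual sample.
w_sgd = 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        w_sgd -= lr * 2 * (w_sgd * xi - yi) * xi

print(w, w_sgd)  # both approach 2; SGD gets there via many more (noisier) updates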

Overfitting occurs when the model learns the details and noise in the training data to the degree that it adversely impacts the model's performance on new information. It is more likely to occur with nonlinear models that have more flexibility when learning a target function. An example would be a model that is looking at cars and trucks but only recognizes trucks that have a specific box shape; it might fail to notice a flatbed truck because it saw only a particular kind of truck in training. The model performs well on training data, but not in the real world.

Underfitting refers to a model that is neither well-trained on the data nor able to generalize to new information. This usually happens when there is too little or incorrect data to train the model. An underfit model has both poor performance and poor accuracy.

To combat overfitting and underfitting, you can resample the data to estimate model accuracy (k-fold cross-validation) and use a validation dataset to evaluate the model.

There are two methods here: we can either initialize the weights to zero or assign them randomly.

Initializing all weights to 0: This makes your model similar to a linear model. All the neurons and every layer perform the same operation, giving the same output and making the deep net useless.

Initializing all weights randomly: Here, the weights are assigned randomly by initializing them very close to 0. It gives better accuracy to the model since every neuron performs different computations. This is the most commonly used method.

There are four layers in a CNN: the convolutional layer, the ReLU (activation) layer, the pooling layer, and the fully connected layer.

Pooling is used to reduce the spatial dimensions of the feature maps in a CNN. It performs down-sampling operations to reduce the dimensionality and creates a pooled feature map by sliding a filter matrix over the input matrix.
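For example, 2x2 max pooling with stride 2 keeps the largest value in each non-overlapping window (a NumPy sketch; the input matrix is invented):

import numpy as np

feature_map = np.array([[1, 3, 2, 4],
                        [5, 6, 1, 2],
                        [7, 2, 9, 1],
                        [3, 4, 6, 8]])

# Split the 4x4 map into 2x2 blocks and take the max of each block.
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[6 4] [7 9]]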

Long Short-Term Memory (LSTM) is a special kind of recurrent neural network capable of learning long-term dependencies; remembering information for long periods is its default behavior. There are three steps in an LSTM network: the network decides what to forget, selectively updates its cell state with new information, and decides what part of the current state to output.

While training an RNN, your slope (gradient) can become either too small or too large, which makes training difficult. When the slope is too small, the problem is known as a Vanishing Gradient. When the slope tends to grow exponentially instead of decaying, it's referred to as an Exploding Gradient. Gradient problems lead to long training times, poor performance, and low accuracy.

TensorFlow provides both C++ and Python APIs, making it easier to work with, and it has a faster compilation time compared to other Deep Learning libraries like Keras and Torch. TensorFlow supports both CPU and GPU computing devices.

This is another frequently asked deep learning interview question. A tensor is a mathematical object represented as a higher-dimensional array. These arrays of data, with different dimensions and ranks, fed as input to the neural network are called Tensors.

Constants - Constants are parameters whose values do not change. To define a constant, we use the tf.constant() command. For example:

a = tf.constant(2.0, tf.float32)
b = tf.constant(3.0)
print(a, b)

Variables - Variables allow us to add new trainable parameters to the graph. To define a variable, we use the tf.Variable() command and initialize it before running the graph in a session. An example:

W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)

Placeholders - these allow us to feed data to a TensorFlow model from outside the model. A placeholder permits a value to be assigned later. To define a placeholder, we use the tf.placeholder() command. An example:

a = tf.placeholder(tf.float32)
b = a * 2

with tf.Session() as sess:
    result = sess.run(b, feed_dict={a: 3.0})
    print(result)

Sessions - a session is run to evaluate the nodes. This is called the TensorFlow runtime. For example:

a = tf.constant(2.0)
b = tf.constant(4.0)
c = a + b

# Launch the session
sess = tf.Session()

# Evaluate the tensor c
print(sess.run(c))

(Note that tf.placeholder and tf.Session are TensorFlow 1.x APIs; in TensorFlow 2.x they are available under tf.compat.v1.)

Everything in TensorFlow is based on creating a computational graph: a network of nodes, where each node represents a mathematical operation and each edge represents a tensor flowing between them. Since data flows in the form of a graph, it is also called a DataFlow Graph.

Suppose there is a wine shop purchasing wine from dealers, which they resell later. But some dealers sell fake wine. In this case, the shop owner should be able to distinguish between fake and authentic wine.

The forger will try different techniques to sell fake wine and make sure specific techniques go past the shop owner's checks. The shop owner would probably get some feedback from wine experts that some of the wine is not original. The owner would have to improve how he determines whether a wine is fake or authentic.

The forger's goal is to create wines that are indistinguishable from the authentic ones, while the shop owner intends to accurately tell whether the wine is real or fake.

There is a noise vector coming into the forger who is generating fake wine.

Here the forger acts as a Generator.

The shop owner acts as a Discriminator.

The Discriminator gets two inputs; one is the fake wine, while the other is the real authentic wine. The shop owner has to figure out whether it is real or fake.

So, there are two primary components of a Generative Adversarial Network (GAN): the Generator and the Discriminator.

The generator is a CNN that keeps producing images that get closer in appearance to the real images, while the discriminator tries to determine the difference between real and fake images. The ultimate aim is to make the discriminator learn to identify real and fake images.

An autoencoder is a neural network with three layers in which the number of input neurons is equal to the number of output neurons. The network's target output is the same as its input. It uses dimensionality reduction to restructure the input, compressing the image input to a latent space representation and then reconstructing the output from this representation.

Bagging and Boosting are ensemble techniques that train multiple models using the same learning algorithm and then combine their predictions to make a final decision.

With Bagging, we take a dataset and split it into training data and test data. Then we randomly select data to place into the bags and train a model on each bag separately.

With Boosting, the emphasis is on selecting the data points that gave wrong outputs in earlier rounds, in order to improve the accuracy.

Read more here:
Top Deep Learning Interview Questions and Answers for 2024 - Simplilearn


Google AI heavyweight Jeff Dean talks about algorithmic breakthroughs and data center emissions – Fortune

Google sent a jolt of unease into the climate change debate this month when it disclosed that emissions from its data centers rose 13% in 2023, citing the AI transition in its annual environmental report. But according to Jeff Dean, Google's chief scientist, the report doesn't tell the full story and gives AI more than its fair share of blame.

Dean, who is chief scientist at both Google DeepMind and Google Research, said that Google is not backing off its commitment to be powered by 100% clean energy by the end of 2030. But, he said, that progress is not necessarily a linear thing, because some of Google's work with clean energy providers will not come online until several years from now.

"Those things will provide significant jumps in the percentage of our energy that is carbon-free energy, but we also want to focus on making our systems as efficient as possible," Dean said at Fortune's Brainstorm Tech conference on Tuesday, in an onstage interview with Fortune's AI editor Jeremy Kahn.

Dean went on to make the larger point that AI is not as responsible for increasing data center usage, and thus carbon emissions, as critics make it out to be.

"There's been a lot of focus on the increasing energy usage of AI, and from a very small base that usage is definitely increasing," Dean said. "But I think people often conflate that with overall data center usage, of which AI is a very small portion right now but growing fast, and then attribute the growth rate of AI-based computing to the overall data center usage."

Dean said that it's important to examine "all the data and the true trends that underlie this," though he did not elaborate on what those trends were.

One of Googles earliest employees, Dean joined the company in 1999 and is credited with being one of the key people who transformed its early internet search engine into a powerful system capable of indexing the internet and reliably serving billions of users. Dean cofounded the Google Brain project in 2011, spearheading the companys efforts to become a leader in AI. Last year, Alphabet merged Google Brain with DeepMind, the AI company Google acquired in 2014, and made Dean chief scientist reporting directly to CEO Sundar Pichai.

By combining the two teams, Dean said that the company has a better set of ideas to build on, and can "pool the compute so that we focus on training one large-scale effort like Gemini rather than multiple fragmented efforts."

Dean also responded to a question about the status of Google's Project Astra, a research project which DeepMind leader Demis Hassabis unveiled in May at Google I/O, the company's annual developer conference. Described by Hassabis as a universal AI agent that can understand the context of a user's environment, a video demonstration of Astra showed how users could point their phone camera at nearby objects and ask the AI agent relevant questions such as "What neighborhood am I in?" or "Did you see where I left my glasses?"

At the time, the company said the Astra technology will come to the Gemini app later this year. But Dean put it more conservatively: "We're hoping to have something out into the hands of test users by the end of the year," he said.

"The ability to combine Gemini models with models that actually have agency and can perceive the world around you in a multimodal way is going to be quite powerful," Dean said. "We're obviously approaching this responsibly, so we want to make sure that the technology is ready and that it doesn't have unforeseen consequences, which is why we'll roll it out first to a smaller set of initial test users."

As for the continued evolution of AI models, Dean noted that additional data and computing power alone will not suffice. "A couple more generations of scaling will get us considerably farther," Dean said, but eventually there will be a need for some additional algorithmic breakthroughs.

Dean said his team has long focused on ways to combine scaling with algorithmic approaches in order to improve factuality and reasoning capabilities, so that the model can imagine plausible outputs and reason its way through which one makes the most sense.

Those kinds of advances, Dean said, will be important "to really make these models robust and more reliable than they already are."

Read more coverage from Brainstorm Tech 2024:

Wiz CEO says consolidation in the security market is truly a necessity as reports swirl of $23 billion Google acquisition

Why Grindr's CEO believes synthetic employees are about to unleash a brutal talent war for tech startups

Experts worry that a U.S.-China cold war could turn hot: Everyone's waiting for the shoe to drop in Asia

Here is the original post:
Google AI heavyweight Jeff Dean talks about algorithmic breakthroughs and data center emissions - Fortune


Types of Cyber Attacks You Should Be Aware of in 2024 – Simplilearn

Life today has become far more comfortable because of various digital devices and the internet to support them. There is a flip side to everything good, and that also applies to the digital world today. The internet has brought in a positive change in our lives today, but with that, there is also an enormous challenge in protecting your data. This gives rise to cyber attacks. In this article, we will discuss the different types of cyber attacks and how they can be prevented.

There are many varieties of cyber attacks that happen in the world today. If we know the various types of cyberattacks, it becomes easier for us to protect our networks and systems against them. Here, we will closely examine the top ten cyber-attacks that can affect an individual, or a large business, depending on the scale.

Elevate your cybersecurity acumen with our intensive Cyber security Bootcamp, where you'll delve into the diverse landscape of cyber attacks. From phishing to malware, ransomware to DDoS attacks, our comprehensive program equips you with the skills to anticipate, prevent, and mitigate a wide range of threats.

Let's start with the different types of cyberattacks on our list:

This is one of the most common types of cyberattacks. Malware refers to malicious software, including viruses, worms, spyware, ransomware, adware, and trojans.

The trojan virus disguises itself as legitimate software. Ransomware blocks access to the network's key components, whereas Spyware is software that steals all your confidential data without your knowledge. Adware is software that displays advertising content such as banners on a user's screen.

Malware breaches a network through a vulnerability, typically when the user clicks a dangerous link, downloads an email attachment, or uses an infected pen drive.

Let's now look at how we can prevent a malware attack:

Phishing attacks are one of the most prominent and widespread types of cyberattacks. Phishing is a type of social engineering attack wherein an attacker impersonates a trusted contact and sends the victim fake emails.

Unaware of this, the victim opens the mail and clicks on the malicious link or opens the mail's attachment. By doing so, attackers gain access to confidential information and account credentials. They can also install malware through a phishing attack.

Phishing attacks can be prevented by following the below-mentioned steps:

It is a form of attack wherein a hacker cracks your password with various programs and password-cracking tools like Aircrack, Cain & Abel, John the Ripper, Hashcat, etc. There are different types of password attacks, such as brute-force attacks, dictionary attacks, and keylogger attacks.

Listed below are a few ways to prevent password attacks:

A Man-in-the-Middle Attack (MITM) is also known as an eavesdropping attack. In this attack, an attacker comes in between a two-party communication, i.e., the attacker hijacks the session between a client and host. By doing so, hackers steal and manipulate data.

In such an attack, the client-server communication is cut off, and instead, the communication line goes through the hacker.

MITM attacks can be prevented by following the below-mentioned steps:

A Structured Query Language (SQL) injection attack occurs on a database-driven website when the hacker manipulates a standard SQL query. It is carried out by injecting malicious code into a vulnerable website search box, thereby making the server reveal crucial information.

This results in the attacker being able to view, edit, and delete tables in the databases. Attackers can also get administrative rights through this.

To prevent a SQL injection attack:
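The core defence is to use parameterized queries rather than string concatenation. Here is a minimal sketch using Python's built-in sqlite3 module (the table and the user input are invented for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a typical injection attempt

# Vulnerable: building SQL by string concatenation lets the input
# alter the query's logic.
# query = "SELECT * FROM users WHERE name = '" + user_input + "'"

# Safe: the ? placeholder treats the input strictly as data.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] - the injection attempt matches nothing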

A Denial-of-Service Attack is a significant threat to companies. Here, attackers target systems, servers, or networks and flood them with traffic to exhaust their resources and bandwidth.

When this happens, catering to the incoming requests becomes overwhelming for the servers, resulting in the websites they host either shutting down or slowing down. This leaves legitimate service requests unattended.

It is also known as a DDoS (Distributed Denial-of-Service) attack when attackers use multiple compromised systems to launch this attack.

Let's now look at how to prevent a DDoS attack:

As the name suggests, an insider threat does not involve a third party but an insider: an individual from within the organization who knows everything about it. Insider threats have the potential to cause tremendous damage.

Insider threats are rampant in small businesses, as the staff there hold access to multiple accounts with data. Reasons for this form of attack are many: greed, malice, or even carelessness. Insider threats are hard to predict and hence tricky to prevent.

To prevent the insider threat attack:

The term Cryptojacking is closely related to cryptocurrency. Cryptojacking takes place when attackers access someone else's computer to mine cryptocurrency.

The access is gained by infecting a website or manipulating the victim into clicking a malicious link. Attackers also use online ads with JavaScript code for this. Victims are unaware of it, as the crypto-mining code works in the background; a delay in execution is the only sign they might witness.

Cryptojacking can be prevented by following the below-mentioned steps:

A Zero-Day Exploit happens after a network vulnerability is announced but before a patch or solution is implemented. The vendor notifies users of the vulnerability so they are aware; however, this news also reaches the attackers.

Depending on the vulnerability, the vendor or the developer could take any amount of time to fix the issue. Meanwhile, the attackers target the disclosed vulnerability. They make sure to exploit the vulnerability even before a patch or solution is implemented for it.

Zero-day exploits can be prevented by:

The victim here is a particular group within an organization, region, etc. In such an attack, the attacker targets websites that are frequently used by the targeted group. Websites are identified either by closely monitoring the group or by guessing.

After this, the attackers infect these websites with malware, which infects the victims' systems. The malware in such an attack targets the user's personal information. Here, it is also possible for the hacker to take remote access to the infected computer.

Let's now see how we can prevent the watering hole attack:

Beyond these ten, there are many other types of cyberattacks worth knowing:

Spoofing: an attacker impersonates someone or something else to access sensitive information and carry out malicious activities. For example, they can spoof an email address or a network address.

Identity-based attacks: performed to steal or manipulate others' personal information, such as login credentials or PINs, and gain unauthorized access to their systems.

Code injection attacks: performed by inserting malicious code into a software application to manipulate data. For example, the attacker puts malicious code into a SQL database to steal data.

Supply chain attacks: exploit software or hardware supply chain vulnerabilities to collect sensitive information.

DNS tunneling: the attacker uses the Domain Name System (DNS) to bypass security measures and communicate with a remote server.

DNS spoofing: a cyberattack in which an attacker manipulates a website's DNS records to control its traffic.

IoT-based attacks: exploit vulnerabilities in Internet of Things (IoT) devices, like smart thermostats and security cameras, to steal data.

Ransomware: encrypts the victim's data and demands payment in exchange for restoring access.

Distributed Denial-of-Service (DDoS) attacks: flood a website with traffic to make it unavailable to legitimate users and to exploit vulnerabilities in the specific network.

Spamming: sending unauthentic emails to spread phishing scams.

Account takeover: hackers use stolen login credentials to access others' bank accounts.

ATM cash-out attacks: hackers gain access to a bank's computer systems to withdraw large amounts of cash from ATMs.

Whale-phishing attacks: target high-profile individuals like executives or celebrities, using sophisticated social engineering techniques to obtain sensitive information.

Spear-phishing attacks: target specific individuals or groups within an organization. Attackers use social engineering techniques to obtain sensitive information.

URL interpretation attacks: exploit vulnerabilities in how a web browser interprets a URL (Uniform Resource Locator) and requests the corresponding web page.

Session hijacking: the hacker gets access to a user's session ID, which authenticates the user's session with a web application, and takes control of the user's session.

Brute-force attacks: an attacker gains unauthorized access to a system by trying various passwords until the correct one is found. This can be highly effective against weak passwords.

Web attacks: target websites and can involve SQL injection, cross-site scripting (XSS), and file inclusion.

Trojan horses: malware that appears to be a legitimate program but contains malicious code. Once installed, it can perform malicious actions like stealing data and controlling the system.

Drive-by attacks: the user's system is infected with malware simply by visiting a compromised website, which exploits vulnerabilities in other software to insert the malware without the user's knowledge.

Cross-site scripting (XSS): an attacker inserts unauthorized code into a legitimate website to steal sensitive information like the user's passwords and credit card details.

Eavesdropping attacks: an attacker intercepts communication between two parties to access sensitive information.

Birthday attacks: a cryptographic attack that exploits the birthday paradox to find a collision in a hash function. The attacker generates two inputs that produce the same output hash value, which can be used to bypass access controls.

Volume-based attacks: the attacker floods a system with heavy data to make it inaccessible to legitimate users. An example is a DDoS attack in which various compromised computers flood a specific website with traffic to crash it.

Protocol attacks: exploit vulnerabilities in network protocols to gain unauthorized access to a system or disrupt its regular operation. Examples include the Transmission Control Protocol (TCP) SYN Flood attack and the Internet Control Message Protocol (ICMP) Flood attack.

Application-layer attacks: target the application layer of a system, aiming to exploit vulnerabilities in applications or web servers.

Dictionary attacks: an attacker attempts to guess a user's password by trying a list of common words. This attack often succeeds because many users use weak or easy passwords.

Viruses: malicious software that can replicate itself and spread to other computers. Viruses can cause significant damage to systems, corrupt files, steal information, and more.

Worms: replicate themselves and spread to other computers but, unlike viruses, don't require human interaction.

Backdoors: vulnerabilities that allow attackers to bypass standard authentication procedures and gain unauthorized access to a system or network.

Bots: software programs that automate network or internet tasks. They can be used for malicious purposes, such as Distributed Denial-of-Service (DDoS) attacks.

Business email compromise (BEC): targets businesses and organizations through email. The attackers impersonate a trusted source to trick the victim into transferring funds or sensitive information to the attacker.

Web application injection attacks: target web applications by injecting malicious code into a vulnerable website to steal sensitive information or perform unauthorized actions.

AI-powered attacks: use artificial intelligence and machine learning to bypass traditional security measures.

Rootkits: provide attackers privileged access to a victim's computer system. Rootkits can be used to hide other types of malware, such as spyware or keyloggers, and can be challenging to detect and remove.

Spyware: malware designed to collect sensitive information from a victim's computer system, including passwords, credit card numbers, and other sensitive data.

Social engineering: a technique cybercriminals use to manipulate users into divulging sensitive information or performing actions that are not in their best interest.

Keyloggers: malware designed to capture the keystrokes a victim enters on their computer system, including passwords, credit card numbers, and other sensitive data.

Botnets: networks of compromised computers controlled by a single attacker. Botnets can launch distributed denial-of-service (DDoS) attacks, steal sensitive information, or perform other malicious activities.

Emotet: malware designed to steal sensitive information and spread itself to other computers on a network. Emotet is often spread through phishing emails and can be very difficult to detect and remove.

Adware: malware that displays unwanted advertisements on a victim's computer system. Adware can be annoying and disruptive, but it's generally less harmful than other types of malware.

Fileless malware: doesn't rely on files to infect a victim's computer system. Instead, it executes malicious code using existing system resources, such as memory or registry keys.

Angler phishing attacks: target individuals or organizations using highly targeted and personalized emails. Angler phishing attacks can be difficult to detect and are often successful in stealing sensitive information.

Advanced persistent threats (APTs): cyberattacks characterized by long-term, persistent access to a victim's computer system. APT attacks are highly sophisticated and difficult to detect and remove.

See more here:
Types of Cyber Attacks You Should Be Aware of in 2024 - Simplilearn


One small update brought down millions of IT systems around the world. Its a timely warning – The Conversation

This weekend's global IT outage, caused by a software update gone wrong, highlights the interconnected and often fragile nature of modern IT infrastructure. It demonstrates how a single point of failure can have far-reaching consequences.

The outage was linked to a single update automatically rolled out to CrowdStrike Falcon, a ubiquitous cyber security tool used primarily by large organisations. This caused Microsoft Windows computers around the world to crash.

CrowdStrike has since fixed the problem on their end. While many organisations have been able to resume work now, it will take some time for IT teams to fully repair all the affected systems; some of that work has to be done manually.

Many organisations rely on the same cloud providers and cyber security solutions. The result is a form of digital monoculture.

While this standardisation means computer systems can run efficiently and are widely compatible, it also means a problem can cascade across many industries and geographies. As we've now seen in the case of CrowdStrike, it can even cascade around the entire globe.

Modern IT infrastructure is highly interconnected and interdependent. If one component fails, it can lead to a situation where the failed component triggers a chain reaction that impacts other parts of the system.

As software and the networks it operates in become more complex, the potential for unforeseen interactions and bugs increases. A minor update can have unintended consequences and spread rapidly throughout the network.

As we have now seen, entire systems can be brought to a grinding halt before the overseers can react to prevent it.

When Windows computers everywhere started to crash with a "blue screen of death" message, early reports stated the IT outage was caused by Microsoft.

In fact, Microsoft confirmed it experienced a cloud services outage in the Central United States region, which began around 6pm Eastern Time on Thursday, July 18 2024.

This outage affected a subset of customers using various Azure services. Azure is Microsoft's proprietary cloud services platform.

The Azure outage had far-reaching consequences, disrupting services across multiple sectors, including airlines, retail, banking and media, not only in the United States but also internationally in countries like Australia and New Zealand. It also impacted various Microsoft 365 services, including Power BI, Microsoft Fabric and Teams.

As it has now turned out, the entire Azure outage could also be traced back to the CrowdStrike update. In this case it was affecting Microsoft's virtual machines running Windows with Falcon installed.

Don't put all your IT eggs in one basket.

Companies should use a multi-cloud strategy: distributing their IT infrastructure across multiple cloud service providers. This way, if one provider goes down, the others can continue to support critical operations.

Companies can also ensure their business continues to operate by building redundancies into IT systems. If one component goes down, others can step up. This includes having backup servers, alternative data centres, and failover mechanisms that can quickly switch to backup systems in the event of an outage.

Automating routine IT processes can reduce the risk of human error, which is a common cause of outages. Automated systems can also monitor for potential issues and address them before they lead to significant problems.

Training staff on how to respond when outages occur can help manage a difficult situation back to normal. This includes knowing who to contact, what steps to take, and how to use alternative workflows.

It's highly unlikely the world's entire internet could ever go down, due to the distributed and decentralised nature of the internet's infrastructure. It has multiple redundant paths and systems. If one part fails, traffic can be rerouted through other networks.

However, the potential for even larger and more widespread disruptions than the CrowdStrike outage does exist.

The catalogue of possible causes reads like the script of a disaster movie. Intense solar flares, similar to the Carrington Event of 1859, could cause widespread damage to satellites, power grids, and the undersea cables that are the backbone of the internet. Such an event could lead to internet outages spanning continents and lasting for months.

Read more: Solar storms that caused pretty auroras can create havoc with technology - here's how

The global internet relies heavily on a network of undersea fibre optic cables. Simultaneous damage to multiple key cables, whether through natural disasters, seismic events, accidents, or deliberate sabotage, could cause major disruptions to international internet traffic.

Sophisticated, coordinated cyber attacks targeting critical internet infrastructure, such as root DNS servers or major internet exchange points, could also cause large-scale outages.

While a complete "internet apocalypse" is highly unlikely, the interconnected nature of our digital world means any large outage will have far-reaching impacts, because it disrupts the online services we've grown to depend upon.

Continual adaptation and preparedness are vitally important to ensure the resilience of our global communications infrastructure.

Read the rest here:
One small update brought down millions of IT systems around the world. Its a timely warning - The Conversation


Can We Survive Without Internet? – hackernoon.com

What would happen if, in the course of some war or conflict, China, Russia, or some other rogue actor decided to disrupt the internet? The entire world relies on technology and inter-connectivity. If this global system collapsed, would it be catastrophic or manageable?

Disrupting the internet on a global scale would involve targeting critical network infrastructure such as undersea cables, internet exchange points (IXPs), and major data centers which China and Russia are likely capable of doing.

Many regions would immediately lose connectivity, especially those heavily dependent on international data links. Even if connectivity were not completely severed, degraded network performance would lead to significant slowdowns and congestion due to rerouted traffic. Key online services, including cloud computing platforms, would certainly be disrupted, affecting the businesses and individuals reliant on them.

Naturally, such a disruption would expose vulnerabilities in data security and integrity, and attackers would likely exploit the chaos to breach sensitive data.

Significant data loss could occur if disruptions affect data centers, and it is highly likely that encrypted communications will be intercepted or disrupted, affecting secure data transfer.

In the event of such a catastrophic internet meltdown, IT operations would face numerous challenges. Hopefully, the IT guys are prepared for such a scenario.

Companies generally need robust incident response plans to quickly adapt to the changing network landscape - even more so in the event of a catastrophic internet breakdown. This means ensuring that backup systems are up-to-date and capable of handling increased loads or manual failovers.

IT teams would need to maintain heightened vigilance against cyber-attacks such as phishing, malware, and ransomware.

Of course, this is just the tech side of it.

The deliberate disruption of the internet would be seen as a highly provocative act and would require a serious international response.

The European Union and the United States, among other countries, would be expected to respond with severe diplomatic repercussions, widespread international condemnation, and potential sanctions.

Cyber warfare, as real as it is now, could escalate into broader warfare, and affected nations would likely respond with retaliatory cyber-attacks, if there were any internet left at all.

An attack on the internet by a rogue nation would likely lead to new alliances, cybersecurity treaties, and agreements among other countries.

Of course, it goes without saying that a large-scale internet disruption would have far-reaching economic consequences.

Global trade and supply chains would be heavily disrupted, affecting everything from financial markets to logistics. Businesses could expect significant economic losses due to the halt of online business operations and services.

Without a doubt, there would be massive upheaval in global stock markets due to uncertainty and loss of confidence.

The societal implications of such an event would be profound.

There would likely be widespread public panic and social unrest due to loss of access to information and communication tools.

As reliable news sources become inaccessible, we can expect to see an increase in misinformation and propaganda being spread.

It should be clear by now that the disruption of the internet would have a cascade of effects, from technical and operational challenges to significant geopolitical and societal impacts. Both IT experts and foreign affairs specialists would need to work together to navigate the immediate crisis and to develop strategies for enhancing resilience and security in the long term.

If anything, this proves that the global dependency on internet infrastructure is dangerous and there is a need for resilient, decentralized alternatives.

Read more:
Can We Survive Without Internet? - hackernoon.com


Fraud Alert: Beware! 7% of All Internet Traffic Is Malicious – Moneylife

The internet has come a long way from its idealistic beginnings. The history of the internet has its origin in the efforts of scientists and engineers to build and interconnect computer networks from the 1950s onwards. In 1974, Vint Cerf and Bob Kahn published a research note that evolved into the transmission control protocol (TCP) and internet protocol (IP). However, it was British computer scientist Tim Berners-Lee whose research at CERN in Switzerland resulted in the World Wide Web (WWW), which linked hypertext documents into an information system that could be accessed from any node on the network. The rest, as they say, is history.

However, since the very beginning, scientists and engineers aimed to build networks for sharing information freely. Since these initial networks used for information sharing were known and trusted, they did not pay enough attention to the security aspects of data transfer. Although scientists and engineers later developed several protocols and measures for internet security, the basics remain the same. In other words, internet security will remain in an evolving state, forever.

According to Cloudflare, during the past quarter, half of all hypertext transfer protocol (HTTP) DDoS attacks (DDoS attacks designed to overwhelm a targeted server with HTTP requests) were mitigated using proprietary heuristics that targeted botnets known to Cloudflare. "Another 29% were HTTP DDoS attacks that used fake user agents, impersonated browsers or were from headless browsers. An additional 13% had suspicious HTTP attributes, which triggered our automated system, and 7% were marked as generic floods. One thing to note is that these attack vectors, or attack groups, are not necessarily exclusive and known botnets also impersonate browsers and have suspicious HTTP attributes."

Information technology (IT) and services were ranked as the most targeted industry in the second quarter of 2024. Telecommunications, services providers and the carrier sector came in second. Consumer goods came in third place.

What is more worrying is that one out of every 25 respondents (customers) told Cloudflare that DDoS attacks against them were carried out by state-level or state-sponsored threat actors.

"Almost 75% of respondents reported that they did not know who attacked them or why. Of the respondents who claim they did know, 59% said it was a competitor who attacked them. Another 21% said the DDoS attack was carried out by a disgruntled customer or user, and another 17% said that the attacks were carried out by state-level or state-sponsored threat actors. The remaining 3% reported it being a self-inflicted DDoS attack," the report says.

According to Cloudflare, threat actor sophistication fuels the continued increase in DDoS attacks. It says, "In the first half of 2024, we mitigated 8.5mn (million) DDoS attacks, including 4.5mn in the first quarter (Q1) and 4mn in Q2. Overall, the number of DDoS attacks in Q2 decreased by 11% quarter-over-quarter (q-o-q) but increased 20% year-over-year."

"For context, in 2023, we mitigated 14mn DDoS attacks, and halfway through 2024, we have already mitigated 60% of last year's figure. Cloudflare successfully mitigated 10.2trn (trillion) HTTP DDoS requests and 57 petabytes of network-layer DDoS attack traffic, preventing it from reaching our customers' origin servers," it added.

Cloudflare says this ten-fold difference underscores the dramatic change in the threat landscape. "The tools and capabilities that allowed threat actors to carry out such randomised and sophisticated attacks were previously associated with capabilities reserved for state-level actors or state-sponsored actors. But, coinciding with the rise of generative artificial intelligence (AI) and autopilot systems that can help actors write better code faster, these capabilities have made their way to the common cyber-criminal."

According to the report, Libya ranked as the largest source of DDoS attacks in the second quarter of 2024, followed by Indonesia and the Netherlands. China is ranked the most attacked country in the world. After China, Turkey came second, followed by Singapore, Hong Kong, Russia, Brazil, and Thailand.

"Despite the majority of attacks being small, the number of larger volumetric attacks has increased. One out of every 100 network-layer DDoS attacks exceeds 1mn packets per second (pps), and two out of every 100 exceed 500Gbps (gigabits per second). On layer 7, four out of every 1,000 HTTP DDoS attacks exceed 1mn requests per second," Cloudflare says.

The majority of DDoS attacks are small and quick. However, Cloudflare says even these attacks can disrupt online services that do not follow best practices for DDoS defence. "Furthermore, threat actor sophistication is increasing, perhaps due to the availability of generative AI and developer copilot tools, resulting in attack code that delivers DDoS attacks that are harder to defend against."

However, Cloudflare is not the only one that blocks malicious DDoS attacks. For two days in August 2023, Amazon Web Services (AWS) detected a spike in HTTP/2 requests to Amazon CloudFront. HTTP/2 allows for multiple distinct logical connections to be multiplexed over a single HTTP session.

Last year in October, Google Cloud thwarted a DDoS attack that was seven and a half times bigger than it faced in 2022. The attackers used new techniques to try to disrupt websites and internet services.

While protecting against DDoS attacks can be challenging for common users, here are some steps that can be taken to mitigate the risk...

1. Use a reliable internet service provider (ISP)

ISP with DDoS protection: Choose an ISP that offers DDoS protection services. Many ISPs have built-in safeguards to detect and mitigate DDoS attacks.

2. Enable firewall and security features

Ensure that your router's firewall is enabled to block unauthorised traffic.

Use software (intrusion detection systems, or IDS) that can detect unusual behaviour patterns indicating a potential attack.

3. Keep software updated

Regularly update the firmware of your router and other network devices.

Ensure all software, including operating systems and applications, is up-to-date with the latest security patches.

4. Use strong passwords

Use complex, unique passwords for your router, network, and online accounts to prevent unauthorised access.

Enable multi-factor authentication (MFA) where possible to add an extra layer of security.

5. Implement network segmentation

Use different networks for different purposes (for example, a guest network, internet of things (IoT) devices) to limit the spread of an attack.

6. Use a VPN

A virtual private network (VPN) can help protect your IP address from being exposed, making it harder for attackers to target you.

7. Monitor network traffic

Use tools to monitor network traffic for unusual activity, which can indicate an ongoing attack.

Set up alerts for any abnormal spikes in traffic.
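As a rough illustration of such monitoring (assuming Python with the third-party psutil package; the threshold is an arbitrary placeholder, not a recommended value):

import time
import psutil

THRESHOLD_BYTES_PER_SEC = 50_000_000  # hypothetical alert threshold (~50 MB/s)

prev = psutil.net_io_counters().bytes_recv
while True:  # press Ctrl+C to stop
    time.sleep(1)
    cur = psutil.net_io_counters().bytes_recv
    rate = cur - prev  # inbound bytes received over the last second
    if rate > THRESHOLD_BYTES_PER_SEC:
        print(f"ALERT: inbound traffic spike of {rate / 1e6:.1f} MB/s")
    prev = cur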

8. Educate yourself

Understand the basics of DDoS attacks and how they work.

Keep up-to-date with the latest security news and trends.

9. Utilise DDoS protection services

Consider using third-party DDoS protection services, especially if you run a website or an online service. Services like Cloudflare, Akamai, or AWS Shield can help mitigate attacks.

10. Regular backups

Regularly back up important data to recover quickly in case of an attack.

Have a disaster recovery plan in place for how to respond if an attack occurs.

Implementing these measures can significantly reduce your vulnerability to DDoS attacks and enhance overall network security.

Stay Alert, Stay Safe!

Read the original here:
Fraud Alert: Beware! 7% of All Internet Traffic Is Malicious - Moneylife


Amazon Graviton4 server CPU shown beating AMD and Intel processors in multiple benchmarks – TechSpot

In context: Amazon's AWS Graviton line of Arm-based server CPUs is designed by subsidiary Annapurna Labs. It introduced the processors in 2018 for the Elastic Compute Cloud. These custom silicon chips, featuring 64-bit Neoverse cores, power AWS's A1 instances tailored for Arm workloads like web services, caching, and microservices.

Amazon Web Services has landed a haymaker with its latest Graviton4 processor. They're exclusive to AWS's cloud servers, but the folks at Phoronix have somehow managed to get their hands on a unit to give us a peek at its performance potential.

Graviton4 packs 96 Arm Neoverse V2 cores, each with 2MB of L2 cache. The chip also rocks 12 channels of DDR5-5600 RAM, giving it stupid amounts of memory bandwidth to flex those cores. Positioning this offering for R8g instances, AWS promises up to triple the vCPUs and RAM compared to the previous R7g instances based on Graviton3. The company also claims 30 percent zippier web apps, 40 percent faster databases, and at least 40 percent better Java software performance.

However, the real story lies in those benchmarks, which the publication ran on Ubuntu 24.04. In heavily parallelized HPC workloads like miniFE (finite element modeling) and Xcompact3d (complex fluid dynamics), Graviton4 demolished not just its predecessors but even AMD's EPYC 'Genoa' chips.

One particularly impressive showing was in the ACES DGEMM HPC benchmark, where the 96-core Graviton4 metal instance scored a staggering 71,131 points, smoking the second-place 96-core AMD EPYC 9684X at 53,167 points.

In code compilation, the Graviton4 significantly outpaced the Ampere Altra Max 128-core flagship but lagged behind the varying core count Xeon and EPYC processors. However, it beat the EPYC 9754 in the Timed LLVM Compilation test.

The surprises kept coming with workloads not necessarily associated with Arm chips. Graviton4 demolished the competition in 7-Zip compression. Cryptography is another strong suit, with the Graviton4 nearly tripling its predecessor's performance in algorithms like ChaCha20.

After testing over 30 different workloads, Phoronix concluded that the Graviton4 is hands down the fastest Arm server processor to date. It's giving current Intel and AMD chips a considerable run for their money across various tasks.

Of course, this silicon arms race will only heat up further with new chips like Intel's Granite Rapids and AMD's Turin on the horizon. For now, AWS has a performance monster on its hands with Graviton4.

Image credit: Phoronix

See original here:
Amazon Graviton4 server CPU shown beating AMD and Intel processors in multiple benchmarks - TechSpot


Microsoft Servers Are Back After The World's Biggest Outage - Here's What The Hell Happened – Pedestrian.TV

Thousands of businesses across Australia and the rest of the world are recovering after a massive IT outage caused chaos on Friday afternoon. But what exactly caused the outage, and is it likely to happen again?

"We're aware of an issue with Windows 365 Cloud PCs caused by a recent update to CrowdStrike Falcon Sensor software," Microsoft said in a statement on X on Friday.

Massive disruptions wreaked havoc on everything from radio and television to banks and grocery stores after cybersecurity firm CrowdStrike pushed out a faulty content update to Windows servers. Servers running on Mac and Linux systems were not impacted by the outage.

CrowdStrike, an American cybersecurity firm that offers a range of cloud-based security services to 538 of the Fortune 1000 companies, launched the new update to its Falcon software on Friday, which caused a malfunction that disabled software worldwide. Ironically, the software is designed to protect against disruptions and crashes.

"This system was sent an update and that update had a software bug in it and it caused an issue with the Microsoft operating system," CrowdStrike's CEO, George Kurtz, told the US Today Show.

"We identified this very quickly and remediated the issue, and as systems come back online, as they're rebooted, they're coming up and they're working, and now we are working with each and every customer to make sure we can bring them back online."

"But that was the extent of the issue in terms of a bug that was related to our update."

If that wasn't enough, Microsoft's own Azure cloud services also faced a major outage, causing even further issues for businesses. The two outages were unrelated, so I guess it was just a bad day for Microsoft.

The issue that prompted a blue screen of death for millions of users across the country was *not* the result of a cyberattack or hack, so you don't have to worry about an ongoing threat to your security.

"This is not a security incident or cyberattack. The issue has been identified, isolated and a fix has been deployed," Kurtz wrote on X. "We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website."

Kurtz added that customers "remain fully protected," while apologising for the inconvenience and disruption.

Australian cybersecurity leader Alastair MacGibbon told the ABC that the issue wasn't malicious.

"This is all about communication. This is about just reassuring the public that this doesn't appear to be a malicious act," MacGibbon said.

"Of course, in slower time, it would be to try to understand how you could build systems to reduce the likelihood of this happening again."

"You wouldn't be calling this a near miss. It's certainly a hit, but it's a hit that wasn't malicious. And as a consequence, we'll learn more from it and there'll be plenty of raking over the coals by government agencies and corporates all around the world."

At this point it's probably easier to list the businesses that weren't affected by the outage.

Low-cost airline Jetstar cancelled all Australia and New Zealand flights as a result of the outage, with flights only resuming at 2am on Saturday morning. Things should be largely back to normal today, but brace for delays if you're heading to the airport.

Jetstar said flights on Saturday "are currently planned to operate as scheduled. Please proceed to the airport as usual."

"There may be a small number of flights impacted due to operational reasons. If your flight is impacted, we will communicate directly to you using the contact details on your booking," a statement on the Jetstar website read.

The outage also hit the airwaves, causing Triple J host Abby Butler to manually play the station's theme music from her phone.

Self-serve checkouts and eftpos facilities at supermarkets and petrol stations also caused chaos, with some forced to close while others went cash-only.

Many major banks, including CommBank and ANZ, also had to close, which made getting cash out virtually impossible.

Rideshare services and delivery apps like Uber and DoorDash also faced issues, which were likely caused by payment system outages.

The outage is being described as perhaps the biggest in history, but thankfully, it looks like it is already mostly resolved.

The Deputy Secretary from the Home Affairs Cyber and Infrastructure Security Centre says the issue should self-resolve within the next hours and days.

"There is no reason to panic, CrowdStrike are on it, it is not a cybersecurity incident and we're working as fast as we can to resolve the incident," he said on X.

Most stores and services seem to be operating as normal on Saturday morning, with social media users reporting that even the Jetstar desk at Sydney Airport didn't look too manic.

It should go without saying that anyone catching a flight today should probably allow some extra time to avoid an airport-induced headache.

Read the original post:
Microsoft Servers Are Back After The World's Biggest Outage Here's What The Hell Happened - Pedestrian.TV


UC Regents Approve New School of Computing, Information and Data Sciences at UC San Diego – University of California San Diego

Artificial intelligence and machine learning present new career opportunities for today's students, who will become the next academic researchers and innovative professionals across sectors. Given the potential for these students to positively transform society, the San Diego Supercomputer Center and the Corporation for Education Network Initiatives in California (CENIC) are providing an AI education infrastructure called the CENIC AI Resource (CENIC AIR) for the entire CENIC community: grades K-12, public libraries, community colleges and the California State University (CSU) system.

The infrastructure is interoperable with what UC San Diego uses internally to teach thousands of students, including HDSI students who participate in Capstone Program projects, across dozens of courses every quarter.

Going forward, SDSC and CENIC will use the National Data Platform, a project aimed at a service ecosystem to make access to and use of scientific data open and equitable across a broad range of communities, including traditionally underrepresented researchers, to integrate educational content curation into this infrastructure so that educators across all of these segments can share data, exercises, projects and course content.

"We expect this innovative use of technology to significantly decrease the friction for students moving between segments, thus benefitting transfer students from community colleges to CSU or UC campuses, as well as K-12 to college students, and college to Ph.D. students," said Frank Würthwein, director of SDSC.

Read more here:

UC Regents Approve New School of Computing, Information and Data Sciences at UC San Diego - University of California San Diego


How to Plan for Your Next Career Move in Data Science and Machine Learning | by TDS Editors | Jul, 2024 – Towards Data Science

Feeling inspired to write your first TDS post? We're always open to contributions from new authors.

Data science and machine learning professionals are facing uncertainty from multiple directions: the global economy, AI-powered tools and their effects on job security, and an ever-shifting tech stack, to name a few. Is it even possible to talk about recession-proofing or AI-proofing one's career these days?

The most honest answer we can give is "we don't really know," because as we've seen with the rise of LLMs in the past couple of years, things can and do change very quickly in this field (and in tech more broadly). That, however, doesn't mean we should just resign ourselves to inaction, let alone despair.

Even in challenging times, there are ways to assess the situation, think creatively about our current position and what changes we'd like to see, and come up with a plan to adjust our skills, self-presentation, and mindset accordingly. The articles we've selected this week each tackle one (or more) of these elements, from excelling as an early-career data scientist to becoming an effective communicator. They offer pragmatic insights and a healthy dose of inspiration for practitioners across a wide range of roles and career stages. Let's dive in!

Read more here:

How to Plan for Your Next Career Move in Data Science and Machine Learning | by TDS Editors | Jul, 2024 - Towards Data Science
