
Could AI communicate with aliens better than we could? – Space.com

If the search for extraterrestrial intelligence (SETI) is successful, we may require the help of artificial intelligence (AI) to understand what the aliens are saying and, perhaps, talk back to them.

In popular culture, we've gotten used to aliens speaking English, or being instantly understandable with the help of a seemingly magical universal translator. In real life, it might not be so easy.

Consider the potential problems. Number one would be that any potential aliens we encounter won't be speaking a human language. Number two would be our lack of knowledge about the aliens' culture or sociology: even if we could translate their message, we might not understand its relevance to their cultural touchstones.

Eamonn Kerins, an astrophysicist from the Jodrell Bank Centre for Astrophysics at the University of Manchester in the U.K., thinks that the aliens themselves might recognize these limitations and opt to do some of the heavy lifting for us by making their message as simple as possible.

"One might hope that aliens who want to establish contact might be attempting to make their signal as universally understandable as possible," said Kerins in a Zoom interview. "Maybe it's something as basic as a mathematical sequence, and already that conveys the one message that perhaps they hoped to send in the first place, which is that we're here, you're not alone."

Related: Could AI find alien life faster than humans, and would it tell us?

Indeed, the possibility of receiving recognizable mathematical information (pi, say, or a burst of prime numbers in sequence, as was the case in the novel "Contact" by Carl Sagan) has been considered in SETI for decades, but it's not the only possible message that we might receive. Other signals might be more sophisticated in their design, trying to convey more complicated concepts, and this is where we hit problem number three: that alien language could be orders of magnitude more complex than human communication.

This is where we will need AI's help, but to understand how, first we must delve into the details behind the structure of language.

When we talk about a signal or a message being complex, we don't mean that the aliens will necessarily be talking about complex matters. Rather, it refers to the complexity underlying the structure of their message: their language. The tool for measuring this is "information theory," which was developed by the cryptographer and mathematician Claude Shannon, who worked at Bell Labs in New Jersey in the late 1940s, and was expanded on by linguist George Zipf of Harvard University.

Information theory is a way of distilling the information content of any given communication. Shannon realized that any kind of conveyance of information (be it human language, the chemical exhalations of plants that attract predators to eat the caterpillars on their leaves, or the transmission of data down a fiber optic cable) can be broken down into discrete units, or bits. These are like the 'quanta' of communication, such as the letters of the alphabet or a dolphin's repertoire of whistles.

In language, these bits cannot just go in any order. There is syntax: the grammatical rules that dictate how the bits can be ordered. For example, in English, a 'q' at the beginning of a word is always followed by a 'u', and then the 'u' can be followed by only a limited number of letters, and so on. Now suppose there is a word with a gap in it: 'qu__k'. We know from the syntax that there are only a few combinations of letters that can fill the gap: 'ac' (quack), 'ar' (quark), 'ic' (quick) and 'ir' (quirk). But if the word is part of a sentence, 'The duck went qu__k', then through context we know the missing letters are 'ac'.

By knowing the rules, or syntax, we can fill in the blanks. The amount that can be missing while still allowing us to complete the word or sentence is measured by "Shannon entropy," and thanks to their complexity, human languages have the highest Shannon entropy of any known form of natural communication on the planet.
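As a rough illustration of Shannon's measure, here is a short Python sketch that computes the entropy of a string of symbols; the sample strings are purely illustrative:

```python
from collections import Counter
from math import log2

def shannon_entropy(message):
    """Average bits of information per symbol in a message."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Structured text reuses some symbols far more than others, so its entropy is
# lower than that of a string where every symbol is equally likely.
english = "the duck went quack and the quick quark went quirk"
uniform = "abcdefghijklmnopqrstuvwxyz"
print(round(shannon_entropy(english), 2))  # lower: syntax constrains the symbols
print(round(shannon_entropy(uniform), 2))  # log2(26), about 4.7: no structure at all
```

The more predictable the next symbol is from the ones around it, the lower the entropy; a language with higher entropy packs more surprise into each unit.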

Meanwhile, Zipf was able to quantify these basic principles of Shannon's information theory. In any communication, some of these fundamental units will appear more often than others. For example, in human language, letters such as 'a', 'e', 'o', 't' and 'r' appear far more often than 'q' or 'z'. When the units are plotted on a log-log graph with the most common first (rank on the x-axis, rate of occurrence on the y-axis), all human languages produce a slope with a gradient of -1. At the other extreme, a baby's random babbling results in a horizontal line on the graph, with all sounds being equally likely. The more complex the communication (as the baby grows into a toddler and starts to talk, for example), the more the slope converges on a -1 gradient.

A transmission of the digits of pi, for instance, would not carry a -1 slope. So instead of searching only for technosignatures (the technologically generated signals that could mark other advanced extraterrestrial civilizations), some researchers think that SETI should specifically look for signals with a -1 slope, regardless of whether they appear artificial or not. The machine-learning algorithms that sift through every scrap of data collected by radio telescopes could be configured to analyze each potential signal and determine whether it adheres to Zipf's law.
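A minimal sketch of that Zipf analysis in Python, assuming the signal has already been segmented into discrete units; the toy distributions below are invented for illustration:

```python
from collections import Counter
from math import log

def zipf_slope(symbols):
    """Least-squares slope of log(frequency) vs. log(rank)."""
    freqs = sorted(Counter(symbols).values(), reverse=True)
    xs = [log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# A Zipf-like stream (frequency roughly proportional to 1/rank) gives a slope
# near -1; a uniform stream (every unit equally common, like random babbling)
# gives a slope near 0.
zipfian = [s for rank, s in enumerate("abcdefghij", 1) for _ in range(120 // rank)]
uniform = list("abcdefghij") * 12
print(round(zipf_slope(zipfian), 2))  # close to -1
print(round(zipf_slope(uniform), 2))  # close to 0
```

A signal whose unit frequencies fall on that -1 line would look statistically language-like, whatever its origin.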

Beyond that, alien communication could have a higher Shannon entropy than human language, and if it is much higher, it might make their language too difficult for humans to grasp.

But perhaps not for AI. Already, AI is being put to the test trying to understand communication from a non-human species. If it can pass that test, perhaps AI will be ready to tackle any alien messages in the future.

Denise Herzing, who is the Research Director at the Wild Dolphin Project in Jupiter, Florida, is one of the world's foremost experts in trying to understand what dolphins are saying to each other. Herzing has been swimming with dolphins and studying their communication for four decades, and has now introduced AI into the mix.

"We have two ways in which we're looking at dolphin communication, and they both use AI," Herzing told Space.com.

One way is listening to recordings of the various whistles and barks that make up the dolphins' own communication. In particular, a machine-learning algorithm is able to take a snippet of dolphin chat and break that communication down into discrete units on a spectrogram (a graph of sounds organized by frequency), just as Shannon and Zipf described, and then it labels each unique unit with a letter. These become analogous to words or letters, and Herzing is looking at the different ways they combine, or in other words their degree of order and structure.

"Right now we've identified 24 small units of sound that recombine within a spectrogram," said Herzing. "So you might have up-whistle 'A' followed by down-whistle 'B,' and so on, and this creates a symbolic code for a sequence of sound."

The machine-learning algorithm is then able to deeply analyze the sound recordings, searching for instances where that symbolic code is repeated.

"We're looking for interesting sequences that are somehow repetitive," said Herzing. "The algorithms then look for substitutions and deletions in the sequences, so you might have the same symbolic code but one little whistle is different. That's a learning algorithm that is pretty important."

That little difference could be because it incorporates a dolphin's signature whistle (every dolphin has its own unique signature whistle, a kind of identifier like human names) or because the context is different.

This is all solidly in line with Shannon's information theory, and Herzing is also interested in Zipf's law and how closely dolphin communication replicates that -1 slope.

"We're looking for language-like structures, because every language has a structure and a grammar that follows rules," said Herzing. "We're looking specifically for what the possibilities are for recombinational data: are our little units of sound only found alone, or do some recombine with another sound?"

Herzing's team has been searching for bigrams: occasions when two units frequently occur together, which might signify a specific phrase. More recently, they have also been searching for trigrams, where three units occur together regularly, implying greater complexity.
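The bigram and trigram search can be sketched in a few lines of Python; the letter sequence standing in for labeled whistle units is invented for illustration:

```python
from collections import Counter

def ngram_counts(sequence, n):
    """Count every run of n consecutive units in a labeled sequence."""
    return Counter(tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1))

# Each letter stands for one labeled unit of sound, as in the symbolic code
# described above ('A' for one whistle shape, 'B' for another, and so on).
whistles = list("ABCABDABCABX")
bigrams = ngram_counts(whistles, 2)
trigrams = ngram_counts(whistles, 3)
print(bigrams.most_common(1))   # ('A', 'B') recurs most often: a candidate "phrase"
print(trigrams.most_common(1))  # ('A', 'B', 'C') also repeats
```

Sequences that repeat far more often than chance would predict are the interesting ones; near-repeats (the same code with one unit substituted or deleted) are the next thing the algorithms look for.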

This is exactly the way that AI would begin analyzing a real message embedded within a SETI signal. If the alien communication is more complex in structure and syntax than human languages then that tells us something about them; perhaps that their species is older than our own, which has given them enough time for their communication to evolve.

However, we still wouldn't know the context of what they are saying to us in the message. This is currently one of the challenges in understanding dolphin communication. Herzing has video footage of dolphin pods showing what they were doing whenever the AI detects a repeated vocalization of symbolic code, which allows her to try to infer the context of the sounds.

"But if you're dealing with radio signals, how are you ever going to figure out what the context of the message is?" asks Herzing, who also takes an interest in SETI. "Looking at animal sounds is an analog for looking at alien signals, potentially to build up the tools to categorize and analyze [the signals]. But for the interpretation part? Oh boy, I don't know."

Once we have received a signal from aliens, we may want to say something back to them. The difficulty in understanding context rears its head again here, too. As Spock says in the film "Star Trek IV: The Voyage Home," when discussing responding to an alien probe, "we could replicate the sounds but not the meaning. We'd be responding in gibberish."

Herzing is trying to circumvent this context problem by mutually agreeing with the dolphins what to call things. This is the essence of CHAT (Cetacean Hearing and Telemetry), which is the second way in which researchers are using AI to try and communicate with dolphins.

In its first incarnation, CHAT was a large device strapped around the chest of the user, receiving sounds via hydrophone (underwater microphone) and then producing sound through a speaker. The modern version is smartphone-sized and worn around the wrist. The idea is not to converse in 'dolphinese,' but to agree with the dolphins upon pre-programmed sounds for certain toys that the dolphins want to play with. For example, if they want to play with a hoop, they make the agreed-upon whistle for 'hoop'. If a diver wearing the CHAT device wants a dolphin to bring them a hoop, the underwater speaker can play the whistle for "hoop." The AI's job is to recognize the agreed-upon whistle amongst all the other sounds a dolphin makes amidst all the various sources of audio interference underwater, such as bubbles and boat propellers.

Herzing has observed that the dolphins have used the agreed-upon whistles, but mostly in different contexts. The problem, says Herzing, is spending enough time with any one particular dolphin to allow them to fully learn the agreed-upon sounds.

With aliens, their message will have traveled many light years; any two-way communication could take decades, centuries, millennia, if it is even possible at all. So whatever information we have about the aliens will be condensed into their original transmission. If, as Kerins suspects, they send something mathematical just as a signal to us that they are there and we are not alone, then we won't have to worry about deciphering it.

However, if they do send a message that is more involved, then, as Herzing is discovering with dolphins, the size of the dataset is crucial. So let's hope the aliens pack their message with information, to give us and AI the best chance of at least assessing some of it.


Computer Vision at the Edge Can Enable AI Apps – Embedded Computing Design

October 11, 2023

Blog

Computer vision refers to the technological goal of bringing human vision (an information-rich and intuitive sensor) to computers, enabling applications such as assembly line inspection, security systems, driver assistance and robotics.

Unfortunately, computers lack the ability to intuit vision and imagery like humans. Instead, we must give computers algorithms to solve domain-specific tasks.

We often take our vision for granted, along with how that biological ability interprets our surroundings, from looking in the refrigerator to check food expiration dates to watching intently for a traffic light to turn green.

Computer vision dates to the 1960s and was initially used for tasks like reading text from a page (optical character recognition) and recognizing simple shapes such as circles or rectangles. Computer vision has since become one of the core domains of artificial intelligence (AI), which encompasses any computer system attempting to perceive, synthesize or infer some deeper meaning from data. There are three types of computer vision: conventional or rules-based, classical machine learning, and deep learning.

In this article, I'll consider AI from the perspective of making computers use vision to perceive the world more like humans. I'll also describe the trade-offs of each type of computer vision, especially in embedded systems that collect, process and act upon data locally, rather than relying on cloud-based resources.

Conventional computer vision refers to programmed algorithms that solve tasks such as motion estimation, panoramic image stitching or line detection.

Conventional computer vision uses standard signal processing and logic to solve tasks. Algorithms such as Canny edge detection or optical flow can find contours or vectors of motion, respectively, which is useful for isolating objects in an image or tracking motion between subsequent images. These types of algorithms rely on filters, transforms, heuristics and thresholds to extract meaningful information from an image or video. They are often a precursor to an application-specific algorithm such as decoding information within a 1-D barcode, where a series of rules decode the barcode upon the detection of individual bars.
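To illustrate the filter-and-threshold pattern, here is a minimal pure-Python sketch of Sobel edge detection (a simpler relative of Canny); the tiny test image and threshold value are illustrative, and a production system would use an optimized library rather than nested loops:

```python
def sobel_edges(image, threshold):
    """Mark pixels whose gradient magnitude exceeds a threshold (filter + threshold)."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# A 6x6 test image: dark on the left, bright on the right -- one vertical edge.
img = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
edges = sobel_edges(img, threshold=100)
print(edges[3])  # the edge columns light up: [0, 0, 1, 1, 0, 0]
```

Every step here is inspectable (kernel, magnitude, threshold), which is exactly the explainability benefit discussed below, and the hand-picked threshold is exactly the kind of parameter that needs retuning for new environments.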

Conventional computer vision is beneficial in its straightforwardness and explainability, meaning that developers can analyze the algorithm at each step and explain why the algorithm behaved as it did. This can be useful in software auditing or safety-critical applications. However, conventional computer vision often requires more expertise to implement properly.

The algorithms often have a small set of parameters that require tuning to achieve optimal performance in different environments. Implementation can be difficult, especially for optimized, high-throughput applications. Some rules, algorithmic decisions or parameter values may have unexpected effects on images that do not fit original expectations, such that it becomes possible to trick the algorithm. Such vulnerabilities and edge cases can be difficult to fix without exposing new edge cases or increasing the algorithm's complexity.

Machine learning emerged as a class of algorithms that use data to set parameters within an algorithm, rather than direct programming or calibration. These algorithms, such as the support vector machine, multilayer perceptron (a precursor to artificial neural networks) and k-nearest neighbor, saw use in applications that were too challenging to solve with conventional computer vision. For example, recognizing a dog is a difficult task to program with a conventional computer vision algorithm, especially where complex scenery and other objects are also present. Training a machine learning algorithm to learn parameters from hundreds or thousands of sample images is more tractable. Edge cases are solved by using a dataset that contains examples of those edge cases.
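As a sketch of the idea, here is a minimal k-nearest-neighbor classifier in plain Python; the feature vectors and labels are invented for illustration (a real pipeline would extract features such as brightness or edge density from actual images):

```python
from collections import Counter

def knn_classify(samples, query, k=3):
    """Label a query by majority vote among its k nearest labeled feature vectors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(samples, key=lambda s: dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy hand-crafted features: (mean brightness, edge density). The labeled
# samples play the role of the training dataset.
training = [
    ((0.90, 0.10), "sky"), ((0.80, 0.20), "sky"), ((0.85, 0.15), "sky"),
    ((0.30, 0.80), "foliage"), ((0.20, 0.90), "foliage"), ((0.25, 0.70), "foliage"),
]
print(knn_classify(training, (0.82, 0.18)))  # "sky"
print(knn_classify(training, (0.28, 0.75)))  # "foliage"
```

Note that no rule for "sky" was ever programmed; the labeled data sets the behavior, which is the defining shift from conventional computer vision.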

Training is computationally intensive, but running the algorithm on new data requires far fewer computing resources, making it possible to run in real time. These trained models generally have less explainability but are more resilient to small, unplanned variations in data, such as the orientation of an object or background noises. It is possible to fix variations that are not handled well by retraining with more data. Larger models with more parameters often boast higher accuracy, but have longer training times as well as more computations needed at run time, which has historically prevented very large models from use in real-time applications on embedded processors.

Classical machine learning-based approaches to computer vision still require an expert to craft the feature set on which the machine learning model is trained. Many of these features are common to conventional computer vision applications. Not all features are useful, thus requiring analysis to prune uninformative features. Implementing these algorithms effectively requires expertise in image processing as well as machine learning.

Deep learning refers to very large neural network models operating on largely unprocessed or raw data. Deep learning has made a large impact on computer vision by pulling feature extraction operations into the model itself, such that the algorithm learns the most informative features as needed. The following figure shows the data flow in each computer vision approach.

Deep learning has the most generality among the types of computer vision; neural networks are universal function approximators, meaning they have the capability of learning any relation between input and output (to the extent that the relation exists). Deep learning excels at finding both subtle and obvious patterns in data, and is the most tolerant to input variations. Applications such as object recognition, human pose estimation and pixel-level scene segmentation are common use cases.

Deep learning requires the least direct tuning and image processing expertise. The algorithms rely on large and high-quality data sets to help the general-purpose algorithm learn patterns by gradually finding parameters that optimize a loss or error metric during training. Novice developers can make effective use of deep learning because the focus shifts from the algorithm's implementation toward data-set curation. Furthermore, many deep learning models are publicly available such that they can be retrained for specific use cases. Using these publicly available models is straightforward; developing fully custom architectures does, however, require more expertise.
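The training loop described above (gradually finding parameters that optimize a loss metric) can be sketched at its simplest with a one-parameter model and gradient descent on mean squared error; the data points and learning rate are illustrative, and real networks do the same thing with millions of parameters:

```python
def train_linear(data, lr=0.05, epochs=200):
    """Fit y = w * x by gradient descent on mean squared error."""
    w = 0.0  # start from an uninformed parameter value
    for _ in range(epochs):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # step downhill on the loss surface
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying relation: y = 2x
w = train_linear(data)
print(round(w, 3))  # converges near 2.0
```

Nothing about the relation y = 2x was programmed in; the parameter emerged from the data, which is why curating the data set matters more than hand-tuning the algorithm.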

Compared to conventional computer vision and classical machine learning, deep learning has consistently higher accuracy and is rapidly improving due to immense popularity in research (and, increasingly, commercial) communities. However, deep learning typically has poor explainability since the algorithms are very large and complex; images that are completely unlike the training data set can cause unexpected, unpredictable behavior. Because of their size, deep learning models are so computationally intensive that special hardware is necessary to accelerate them for real-time operation. Training large models on large data sets can be costly, and curating a large data set is often time-consuming and tedious.

However, improvements in processing power, speeds, accelerators such as neural processing units and graphics processing units, and improved software support for matrix and vector operations have made the increase in computation requirements less consequential, even on embedded systems. Embedded microprocessors like the AM6xA portfolio leverage hardware accelerators to run deep learning algorithms at high frame rates.

So which type of computer vision is best?

That ultimately depends on its application, as shown in Figure 2.

In short, computer vision with classical machine learning rests between the other two methods for most attributes; the set of applications that benefit compared to the other two approaches is small. Conventional computer vision can be sufficiently accurate and highly efficient in straightforward, high-throughput or safety-critical applications. Deep learning is the most general, the easiest to develop for, and has the highest accuracy in complex applications and environments, such as identifying a tiny missing component during PCB assembly verification for high-density designs.

Some applications benefit from using multiple types of computer vision algorithms in tandem such that they cover each other's weak points. This approach is common in safety-critical applications dealing with highly variable environments, such as driver assistance systems. For example, you could employ optical flow using conventional computer vision methods alongside a deep learning model for tracking nearby vehicles, and use an algorithm to fuse the results to ascertain whether the two approaches agree with each other. If they do not, the system could warn the driver or start a graceful safety maneuver. Alternatively, it is possible to use multiple types of computer vision sequentially. A barcode reader can use deep learning to locate regions of interest, crop those regions, and then use a conventional computer vision algorithm to decode.
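As a sketch of that agreement check, assuming both methods report bounding boxes for a tracked vehicle, one could compare them with intersection-over-union; the box coordinates and the 0.5 threshold are illustrative:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detectors_agree(flow_box, dnn_box, threshold=0.5):
    """True when the optical-flow track and the deep learning detection overlap enough."""
    return iou(flow_box, dnn_box) >= threshold

# Same vehicle seen by both methods: heavily overlapping boxes -> agreement.
print(detectors_agree((10, 10, 50, 40), (12, 11, 52, 42)))   # True
# Boxes far apart -> disagreement; the system could warn the driver.
print(detectors_agree((10, 10, 50, 40), (80, 80, 120, 110)))  # False
```

The fusion logic itself stays simple and auditable even though one of its inputs comes from a hard-to-explain deep learning model, which is the point of pairing the two approaches.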

The barrier to entry for computer vision is progressively lowering. Open source libraries like OpenCV provide efficient implementations of common functions like edge detection and color conversion. Deep learning runtimes like tensorflow-lite and ONNX runtime enable deep learning models to run efficiently on embedded processors. These runtimes also provide interfaces that custom hardware accelerators can implement to simplify the developer's experience when they are ready to move an algorithm from the training environment on PC or cloud to inference on the embedded processor. Many deep learning architectures are also openly published such that they can be reused for a variety of tasks.

Processors in the Texas Instruments (TI) AM6xA portfolio, such as the AM62A7, contain deep learning acceleration hardware as well as software support for a variety of conventional and deep learning computer vision tasks. Digital signal processor cores like the C66x and hardware accelerators for optical flow and stereo depth estimation also enable high performance conventional computer vision tasks.

With processors capable of both conventional and deep learning computer vision, it becomes possible to build tools that rival sci-fi dreams. Automated shopping carts will streamline shopping; surgical and medical robots will guide doctors to early signs of disease; mobile robots will mow the lawn and deliver packages. If you can envision it, so can the application you'll build. See TI's edge AI vision page to explore how embedded computer vision is changing the world.

Reese Grimsley is a Systems Applications Engineer with the Sitara MPU product line within TI's Processors organization. At TI, Reese works on image processing, machine learning, and analytics for a variety of camera-based end-equipment in industrial markets. One of his focal areas is demystifying Edge AI to help both new and experienced customers understand how they can quickly and easily bring complex deep learning algorithms to their products and improve accuracy, performance, and robustness.



Stay safe and secure this National Cyber Security Awareness Month … – Marquette Today

It's October and that means it's time to celebrate National Cyber Security Awareness Month.

With a little knowledge, a dash of effort and a few minutes of time, you can keep your sensitive data and computer systems locked down tight. Cybersecurity does not have to be intimidating, and it does not require a large investment of time or money. In fact, you can secure your digital life with trusted free tools, and now many cybersecurity best practices can be automated.

Here are the National Cyber Security Alliance's top 10 tips to stay safe online:


Spam and Phishing: Cybercriminals spend each day polishing their skills in luring people to click on malicious links or open bad attachments.

Online Shopping: Just like you would watch your wallet when at the store, it's crucial to protect yourself when shopping online.

Back it Up: Protect yourself against data loss by making backups (electronic copies) of important files.

Malware, Botnets and Ransomware: The internet is a powerful, useful tool, but in the same way that you shouldn't drive without buckling your seat belt or ride a bike without a helmet, you shouldn't venture online without taking some basic precautions.

Romance Scams: We all know that people online aren't always as they appear. However, tens of thousands of internet users fall victim to online romance scams each year, and it can happen to anyone.

Tax Time Safety: Tax season can be a stressful time for many Americans, and while scams are prevalent year-round, there is often a greater proliferation during tax time. Stay safe online while filing your taxes with these best practices, tips and resources.

Spring Clean Your Online Life: A messy digital life leaves your money, identity and personal information vulnerable to bad actors. Keep yourself and your family safe online with these quick tips for a spotless digital space.

Vacation and Travel Tips: Stay cyber safe while away from home by following some simple practices to help keep your devices safe and your vacation plans from going awry.

More information about the National Cyber Security Alliance and National Cyber Security Awareness Month can be found here.


Verizon hosts cybersecurity event at its new executive business … – Verizon

NEW YORK - Verizon Business will host a media event on Wednesday, October 18, 2023, to celebrate 20 years of cybersecurity consulting services and to commemorate Cybersecurity Awareness Month. Of particular interest is a panel discussion led by industry experts Chris Novak, Managing Director of Verizon Cyber Security Consulting; Sean Atkinson, Chief Information Security Officer at the Center for Internet Security; and Krista Valenzuela, Cyber Threat Outreach and Partnerships, NJCCIC.

The panel will feature a thought-provoking discussion on topics including data privacy, the role of AI in cybersecurity, the rise of voice security, the evolution of security controls and other relevant topics. Additionally, the panelists will discuss how local organizations in New Jersey are combating new cybercrime threats. Attendees will learn how Verizon is helping companies leverage 5G network solutions to create new, innovative technologies to help better secure and elevate their businesses.

"Raising greater awareness about cybersecurity is the first step in helping organizations defend against these threats," said Chris Novak, Managing Director of Verizon Cyber Security Consulting. "Verizon is leveraging its network's broad visibility to gather, report and share actionable insights that our customers and other businesses can use to combat new, sophisticated cyberthreats involving vulnerability exploitation and social engineering."

"Organizations need to go on the offensive to implement a stronger, more effective cybersecurity strategy, and then use the necessary tools to execute that strategy," said Sean Atkinson, Chief Information Security Officer at the Center for Internet Security. "Many businesses are prioritizing this and are intensifying their cybersecurity efforts."

"One of the ways we are harnessing the power of AI in New Jersey is in identifying malicious and suspicious websites to assist the State and its critical infrastructure in better defending against these threats," said Krista Valenzuela, Cyber Threat Outreach and Partnerships, NJCCIC.

In addition to the cybersecurity panel, there will be demos that showcase Verizon's expertise in creating highly secure solutions. They include:

Coach-to-Coach Communications: a reliable, private wireless network solution that allows NFL coaches to communicate on the field.

Cashierless Checkout: a solution that utilizes machine learning and computer vision to enable autonomous stores at a location by incorporating 5G UWB and 5G Edge.

Private Wireless Networks: Showcasing the value of premises-based equipment and how private dedicated networks help improve business connectivity and security.

What: Verizon Cybersecurity Event

When: Wednesday, October 18, 9:00 A.M. to 12:00 P.M. ET.

Where: Executive Briefing Center (EBC), 1 Verizon Way, Basking Ridge, NJ 07920.

Cybersecurity Panel: Beginning at 9:00 A.M. ET, the panel discussion will be led by Chris Novak, Managing Director of Verizon Cyber Security Consulting; Sean Atkinson, CISO at the Center for Internet Security; and Krista Valenzuela, Cyber Threat Outreach and Partnerships, NJCCIC. The panel will discuss how innovative tech is driving greater cybersecurity awareness and better business practices, followed by a media Q&A.


NSA releases a repository of signatures and analytics to secure … – National Security Agency

Cyber actors have demonstrated their continued willingness to conduct malicious cyber activity against critical infrastructure by exploiting Internet-accessible and vulnerable Operational Technology (OT) assets. To counter this threat, NSA has released a repository of OT Intrusion Detection Signatures and Analytics to the NSA Cyber GitHub. The capability, known as ELITEWOLF, can enable defenders of critical infrastructure, defense industrial base, and national security systems to identify and detect potentially malicious cyber activity in their OT environments.

Civilian infrastructure has become an attractive target for foreign powers attempting to do harm to U.S. interests. Because of the increase in adversary capabilities, the vulnerability of OT systems, and the potential scope of impact, NSA recommends that OT critical infrastructure owners and operators implement ELITEWOLF as part of a continuous and vigilant system monitoring program.

For more detailed information, visit the ELITEWOLF page on NSA's GitHub.

ELITEWOLF is being released as a follow-up to the "Protect Operational Technologies and Control Systems against Cyber Attacks" Cybersecurity Advisory.

NSA Media Relations
MediaRelations@nsa.gov
443-634-0721


Eastern Shore Man Sentenced to 10 Years in Federal Prison for … – Department of Justice

Baltimore, Maryland - U.S. District Judge Ellen L. Hollander today sentenced Richard Wesley Robinson, age 74, of Cambridge, Maryland, to 10 years in federal prison, followed by 25 years of supervised release, for enticement and coercion of a minor to engage in sexual activity. Judge Hollander also ordered that, upon his release from prison, Robinson must register as a sex offender in the places where he resides, is an employee, and is a student, pursuant to the Sex Offender Registration and Notification Act (SORNA).

The sentence was announced by United States Attorney for the District of Maryland Erek L. Barron and Special Agent in Charge James C. Harris of Homeland Security Investigations (HSI) Baltimore.

According to his guilty plea, prior to July 17, 2018, Robinson communicated with a 12-year-old boy, using mobile phones and the internet to arrange a meeting for sexual activity. On July 17, 2018, Robinson met the victim at a park in Easton, Maryland, where Robinson engaged in sexual activity with the child. Robinson used his cellphone to document the sexual abuse of the minor victim.

In July of 2021, the National Center for Missing and Exploited Children (NCMEC) received a CyberTip report from Snapchat, reporting that Robinson's Snapchat account had uploaded suspected child pornography. Law enforcement later executed a search at Robinson's residence and seized two cellular phones and additional electronic media. Investigators forensically examined the content of the phones seized from Robinson's residence and reviewed the content of his Snapchat and Gmail accounts after obtaining search and seizure warrants. The sexually explicit images that Robinson produced of the victim on July 17, 2018, were found on both of Robinson's cell phones. After his abuse of the victim, Robinson sent text messages to others describing his sexual abuse of the boy and used Snapchat to distribute the sexually explicit images he took of the victim to others. In addition to distributing sexually explicit images of the victim to other internet users, Robinson also engaged in sexually explicit communication regarding minors. During these communications, Robinson discussed the sexual abuse of children, including a prepubescent child who was being cared for by another Snapchat user. On June 9, 2021, Robinson received sexually explicit images depicting the sexual abuse of a two-year-old male victim from that Snapchat user. After receiving the images, Robinson asked the Snapchat user about the abuse and encouraged the Snapchat user to "take some pics."

This case was brought as part of Project Safe Childhood, a nationwide initiative launched in May 2006 by the Department of Justice to combat the growing epidemic of child sexual exploitation and abuse. Led by the United States Attorneys' Offices and the Criminal Division's Child Exploitation and Obscenity Section, Project Safe Childhood marshals federal, state, and local resources to locate, apprehend, and prosecute individuals who sexually exploit children, and to identify and rescue victims. For more information about Project Safe Childhood, please visit http://www.justice.gov/psc. For more information about Internet safety education, please visit http://www.justice.gov/psc and click on the "Resources" tab on the left of the page.

United States Attorney Erek L. Barron commended the HSI for its work in the investigation. Mr. Barron thanked Assistant U.S. Attorney Paul E. Budlow, who prosecuted the federal case.

For more information on the Maryland U.S. Attorney's Office, its priorities, and resources available to help the community, please visit http://www.justice.gov/usao-md/project-safe-childhood and https://www.justice.gov/usao-md/community-outreach.

# # #

Continue reading here:
Eastern Shore Man Sentenced to 10 Years in Federal Prison for ... - Department of Justice

Read More..

One Year in, Public Safety Threat Alliance Plans for Growth – Government Technology

Ransomware delayed Camden County, N.J., police investigations in March and April. A cyber attack that disrupted Oakland, Calif., government operations in February prevented residents from filing police reports. And in May, ransomware knocked computer-aided dispatch offline for the Dallas fire department.

Cyber attacks continue to threaten public safety organizations, and some believe better information sharing could help agencies prepare for and stave off such threats.

That's something Motorola Solutions has been hoping to tackle via an information sharing and analysis organization (ISAO) it launched last year.

Now, one year in, the ISAO is working to expand its international membership, grow its information sources and provide new forms of support, said Jay Kaine, director of the Public Safety Threat Alliance (PSTA) and of the Cyber Fusion Center at Motorola Solutions.

Local public safety organizations are an ideal target for cyber extortionists, Kaine said. Their intolerance for disruption and their access to the municipal budget gives them both the funds and the motive to pay up. Plus, they're small enough that attacks are unlikely to provoke a significant federal response.

Kaine said this year has seen a rise in attacks impacting 911 centers and computer-aided dispatch.

Most attacks against public safety organizations are from profit-seeking ransomware actors, he said, although Russias invasion of Ukraine also prompted a swell of hacktivism. A Motorola report found pro-Russian hacktivism groups formed after the invasion drove hacktivism against public safety agencies up 179 percent in 2022 compared to 2021.

The PSTA shares threat intelligence and advice from various sources, including insights Motorola gets from its security management platform, penetration testing and risk assessments. The PSTA also hopes to bring in other public safety and telecom companies to contribute intelligence.

The ISAO additionally encourages, but doesn't require, members to share information about the threats they encounter, to better forewarn peers.

"What we want to be able to do is get to a point that we've built up essentially this neighborhood watch for public safety," Kaine said. "So that if we get an attack in a small rural county, say in Washington state (because as we know, [with] the Internet, you're just an IP away from getting hit; it's a global threat), we can help our members in the U.K. or, say, in western Australia, inoculate themselves and at least be prepared for what might be coming their way in this space."

Some agencies still may be wary about sharing information, however, and the ISAO tries to set minds at ease about the security and privacy of details they disclose. That includes enabling anonymous information sharing through an encrypted portal and having members indicate the sensitivity level of information using the Traffic Light Protocol.

Public safety organizations range widely in cyber maturity, from more robust regional fusion centers to small rural agencies that have only one person handling IT alongside other duties.

To meet those varied needs, the PSTA provides simple cyber posture improvement tips helpful to less mature organizations, while also making available more in-depth details for organizations able to use them. The latter could include an automated STIX/TAXII feed providing real-time threat information, which the PSTA aims to introduce early next year.
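STIX and TAXII are open standards: STIX defines a JSON format for describing threat indicators, and TAXII defines the HTTPS API for exchanging them. As an illustration of what such a feed delivers, here is a minimal sketch of pulling indicator objects out of a STIX 2.1 bundle using only Python's standard library; the bundle contents, IDs, and IP address below are invented for the example, not drawn from any PSTA feed.

```python
import json

# A hand-written STIX 2.1 bundle for illustration; a real TAXII feed
# would deliver objects like these over authenticated HTTPS.
BUNDLE = """
{
  "type": "bundle",
  "id": "bundle--11111111-1111-4111-8111-111111111111",
  "objects": [
    {
      "type": "indicator",
      "spec_version": "2.1",
      "id": "indicator--22222222-2222-4222-8222-222222222222",
      "created": "2023-10-01T00:00:00.000Z",
      "modified": "2023-10-01T00:00:00.000Z",
      "name": "Known ransomware C2 address",
      "pattern": "[ipv4-addr:value = '203.0.113.7']",
      "pattern_type": "stix",
      "valid_from": "2023-10-01T00:00:00.000Z"
    }
  ]
}
"""

def extract_indicators(bundle_json: str) -> list:
    """Return the indicator objects contained in a STIX bundle."""
    bundle = json.loads(bundle_json)
    return [obj for obj in bundle.get("objects", [])
            if obj.get("type") == "indicator"]

indicators = extract_indicators(BUNDLE)
for ind in indicators:
    print(ind["name"], "->", ind["pattern"])
```

In practice an agency consuming such a feed would match the patterns against its own network telemetry; the value of the automated feed is that this matching can happen in near real time rather than after a quarterly briefing.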

Some of the other resources include quarterly webinars and daily spot reports.

Tabletop exercises are coming next year, too; the PSTA plans to hold its first in Dallas in April 2024. This initial exercise will focus on walking through high-level strategies in the face of an attack and will include federal and state partners.

To date, most PSTA participants are U.S.-based, and the ISAO is looking at increasing international reach. Kaine said he hopes to see the PSTA build membership in Australia, Canada, Latin America, NATO countries and the U.K.

More here:
One Year in, Public Safety Threat Alliance Plans for Growth - Government Technology

Read More..

Private Internet Access expands global server network to 91 countries – IT Brief Australia

Private Internet Access (PIA), a leading provider of virtual private network (VPN) services, has announced a significant expansion of its global server network. The company now operates servers in 91 countries, up from 84, strengthening its position as a key player in the cybersecurity sector.

This announcement comes during Cybersecurity Awareness Month and reflects PIA's continued commitment to bolstering internet security and privacy on a global scale. "We are thrilled to be expanding our server network to include these exciting new locations," stated Himmat Bains, Head of Product at Private Internet Access. "Our primary focus is to think about how we can best meet the changing demands of our users, and these additions are a result of their requests."

PIA's expansion includes the introduction of servers in seven new countries: Bolivia, Ecuador, Guatemala, Nepal, Peru, Uruguay, and South Korea. Notably, this move makes PIA one of the few VPN providers to have server options in all 50 US states and now even more countries around the world.

The company explained that the selection of these specific locations was based on user demand and tailored to meet unique use cases. "We also took into account regions where we saw a need for more country options and carefully considered the unique use cases for each location," Bains added.

For fans of Korean dramas and television, the addition of a South Korean server is particularly noteworthy. This new server will give users unparalleled access to K-drama and other popular Korean content, meeting the growing global demand for this genre.

Apart from adding new countries to its server network, Private Internet Access is also extending the number of dedicated IP locations, incorporating hubs in Brussels, Stockholm, Houston, and San Jose (Silicon Valley).

Dedicated IP addresses are unique IPs assigned to users, offering a smoother online experience while maintaining privacy. PIA's unique, industry-leading token-based system ensures that the user's dedicated IP remains completely unlinked to their email or account. This allows users to engage in activities like accessing IP-restricted networks and conducting online transactions without the inconvenience of extra security verifications.
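PIA has not published the internals of its token system, so the following is only a conceptual sketch of how token-based redemption can decouple a dedicated IP from an account: the provider stores a hash of a randomly issued token rather than any account linkage, so later redemption proves entitlement without identifying the purchaser. All class names, values, and IPs here are hypothetical, not PIA's actual design.

```python
import hashlib
import secrets

class DedicatedIPBroker:
    """Illustrative token scheme: the broker stores only a hash of each
    issued token, so redemption cannot be linked back to the account
    that purchased it."""

    def __init__(self):
        self._token_hashes = {}  # sha256(token) -> assigned IP
        self._ip_pool = ["198.51.100.10", "198.51.100.11"]

    def issue_token(self):
        # Called at purchase time; the raw token is shown once to the
        # user and never stored server-side.
        token = secrets.token_hex(16)
        digest = hashlib.sha256(token.encode()).hexdigest()
        self._token_hashes[digest] = self._ip_pool.pop(0)
        return token

    def redeem(self, token):
        # Called later from the VPN app; only the hash is compared,
        # and no account identifier is involved.
        digest = hashlib.sha256(token.encode()).hexdigest()
        return self._token_hashes.get(digest)

broker = DedicatedIPBroker()
token = broker.issue_token()
print(broker.redeem(token))    # the assigned dedicated IP
print(broker.redeem("bogus"))  # None: unknown token
```

The design choice being illustrated is separation of concerns: the billing system sees an account but no IP, and the redemption system sees a token but no account.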

"At Private Internet Access, our core commitment is to deliver the highest level of online privacy and security," emphasized Himmat Bains. "We continue to look for ways to give our users the extra flexibility and accessibility they need in a rapidly changing digital landscape. As always, we remain dedicated to safeguarding our users' online activities and protecting their personal data."

The company is also renowned for its 100% open source transparency and employs RAM-only servers with military-grade 256-bit AES encryption to ensure the utmost data security. As PIA continues its efforts to improve user experience through the expansion of their server network and recent app updates on iOS, Android, and Desktop, it appears well-positioned to continue leading the way in offering a secure, private and highly accessible internet experience.

See the original post here:
Private Internet Access expands global server network to 91 countries - IT Brief Australia

Read More..

Learning from Let's Encrypt's 10 years of success – InfoWorld

Foundations have a hit-or-miss success rate in software, generally, and open source, specifically. I'm on the record with 908 words of eyeroll for the Open Enterprise Linux Association and OpenTofu, given the conspicuous absence of cloud vendor support. Yet I've also recommended projects like Kubernetes precisely because of their foundation-led community support. Foundations can help foster community but are in themselves no guarantee of success.

This is why Let's Encrypt and the Internet Security Research Group (ISRG) are so fascinating. There is no obvious reason they should've succeeded, yet 10 years in, ISRG's Let's Encrypt has issued more than four billion certificates to secure more than 360 million websites. It's also likely that the nonprofit's Prossimo, a memory safety project, and Divvi Up, a privacy-preserving metrics system, will follow that pattern, even as many other foundations fail to deliver similar victories (OpenStack, anyone?).

The question is why. Why did Let's Encrypt succeed, and what can other nonprofits or open source projects learn from it?

One key reason for Let's Encrypt's success is that it solved a big problem. When Let's Encrypt was founded in 2013, just 28% of page loads were secured on the web. "There were plenty of options that were available [like TLS and SSL]," says Sarah Gran, vice president of communications at ISRG, "but they were not widely used. In order to really advance the security of the web, this needed to change, and it needed to change more commensurate with the pace of the growth and dependence on the Internet that people were having every single day."

Let's Encrypt didn't try to change things with public service announcements. They focused on automation and reducing the complexity of getting a certificate. The more easily developers could adopt and apply certificates to their websites, the more likely they were to use them. "Convenience is the killer app" for developers, as RedMonk's Steve O'Grady has posited.
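Concretely, Let's Encrypt certificates are valid for 90 days by design, which makes automation a necessity rather than a nicety; ACME clients such as certbot typically renew once roughly 30 days of validity remain. A minimal sketch of that renewal-scheduling logic (the certificate inventory and dates are invented for illustration):

```python
from datetime import date, timedelta

# Common renewal threshold used by ACME clients (e.g. certbot's default).
RENEW_WINDOW = timedelta(days=30)

def needs_renewal(expiry: date, today: date) -> bool:
    """Renew once the certificate is within the renewal window."""
    return expiry - today <= RENEW_WINDOW

# Invented inventory for illustration.
certs = {
    "example.com": date(2023, 11, 1),
    "blog.example.com": date(2024, 1, 15),
}

today = date(2023, 10, 20)
due = [name for name, expiry in certs.items()
       if needs_renewal(expiry, today)]
print(due)  # ['example.com']
```

Running a check like this daily from a scheduler is what turns certificate management from a yearly manual chore into an invisible background process.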

It also helped that ISRG and its Let's Encrypt initiative weren't trying to compete with commercial certificate authorities. "We're not here to be heroes," says Gran. "All we're trying to do is solve a problem." By working alongside proprietary providers of certificates, Let's Encrypt could focus on solving the problem of Internet security, not collecting credit for doing so.

When I asked Gran to identify the secret of ISRG's success with Let's Encrypt, she didn't hesitate: "We know what we do well, and we stay in that lane. And what we do well is tackle difficult engineering infrastructure problems," particularly as they relate to Internet security, which ISRG tackles through the lens of automation, efficiency, and scale. ISRG focuses on solving discrete problems, and in so doing has achieved outsized success with Let's Encrypt. That same foundation-led focus should help it with Prossimo and Divvi Up.

Clearly, ISRG's foundation approach has worked, enabling it to work alongside corporate competitors without being competitive. However, it's important to note that foundations aren't essential to a software project's success. In the world of certificate authorities, Comodo and DigiCert thrive alongside Let's Encrypt. Outside the realm of Internet security, it's much the same story. It would be hard to argue that HashiCorp, MongoDB, Elastic, etc., aren't wildly popular with attendant business success. Nor is it true that introducing a foundation to a market guarantees it will trounce single-vendor products. Speaking of HashiCorp, even as he launched the OpenTofu project to provide an open source, foundation-backed fork of HashiCorp's Terraform, Linux Foundation CEO Jim Zemlin told me that he believes both Terraform and OpenTofu will succeed for different reasons.

Terraform, in his view, will succeed because it's great software with a credible company behind it. He also sees OpenTofu taking a big share of the market: "Nobody wants to invest large engineering resources into a project that isn't neutrally owned or is owned and controlled by a single commercial entity. This will lead to better investment in OpenTofu." Despite the relatively small companies contributing to OpenTofu today, Zemlin believes downstream vendor dependence on the codeveloped OpenTofu will create a larger ecosystem as more providers reinvest to improve their downstream products.

Maybe. Foundation-led projects fail all the time.

Why did Kubernetes succeed while OpenStack failed, despite both being filled to the brim with foundation-led communities? According to Zemlin, "it turns out containers [Kubernetes] were the right abstraction for cloud computing workloads and not VMs [OpenStack]." Technology matters. No foundation can overcome being on the wrong side of customer choice for particular technologies.

This brings us back to ISRG and its mission. Similar to its observation in 2015 about website security, today ISRG sees an equally big issue with memory safety. As Gran puts it, "We looked at our infrastructure and various infrastructures out there that the Internet is reliant upon, and we saw how much of it is written in C and C++," with all their problems of memory safety, bugs, and vulnerabilities. Why is this a problem now? After all, such languages have had issues for a long time. Gran credits Microsoft and Google for acknowledging that the vast majority of their vulnerabilities stemmed from memory safety problems, which pinpointed memory safety as a big issue, and one that could be solved through languages like Rust.

Will they succeed in a similar way as Let's Encrypt? Nothing is certain, but the confluence of a big problem with a clear technology that can help (Rust, in this case) makes success far more likely. Whether you're a nonprofit foundation or a for-profit company, a focus on solving a customer problem, along with a bit of luck in customer technology choices, seems to go a long way toward success.

Go here to read the rest:
Learning from Let's Encrypt's 10 years of success - InfoWorld

Read More..

Ofcom cloud report: Where the CMA should focus its probe into AWS … – ComputerWeekly.com

In this guest post, Owen Sayers, an enterprise security architect with Secon Solutions who has 25 years' experience working within the UK's internet security framework, sets out the areas where he thinks the Competition and Markets Authority (CMA) should focus its investigation into the hold that AWS and Microsoft have on the UK's cloud spend.

As expected, the report published by Ofcom has resulted in the referral of the clear dominance of Microsoft and Amazon Web Services (AWS) in the UK's public cloud market to the Competition and Markets Authority (CMA).

Whilst it is right and proper that the CMA should determine its own scope for investigation, a number of areas already examined by Ofcom will undoubtedly be included.

There are, however, other areas that the CMA would be wise to include if they wish to build a full and fair view of the public cloud landscape in the UK today, and how it has been historically influenced or shaped to become what we now see.

Key areas the CMA should consider include the role and effect of UK government policy for cloud adoption, and whether the extent to which Microsoft and AWS may have been able to influence that is fair.

Successive iterations of the 2013 Cloud First policy have become progressively more prescriptive and specific in content, moving from the original position of "consider cloud options first," through "when we mean cloud, we mean public cloud," and finally to "by cloud, we mainly mean Software as a Service."

Taken in concert with the latest update in June 2023, the CMA may conclude, as many industry observers already do, that the current distribution of cloud services in the UK, and their bulk reliance on just two global providers, is largely down to HM government actions to shape the marketplace since 2014, and to the ability of AWS and Microsoft to take advantage of those policies.

The CMA would also be wise to examine the means by which Microsoft took advantage of its huge desktop and server footprint in government to leverage cloud adoption after the UK published its cloud first policy in 2013.

Existing desktop end-user service licences (Windows O/S and Office suite) provided under Microsoft Enterprise Agreements were then bundled with cloud service options, enabling the transfer of users from desktop services to cloud services without new procurements or changes to contract terms.

In particular, the ability of Microsoft to transition large volumes of user identity information from desktop and server-based directories into their cloud equivalent services has given both an adoption incentive to organisations seeking to use this Microsoft licencing flexibility, and significant control to Microsoft over corporate user identity management in their cloud platforms.

It has also introduced a Microsoft-controlled technology barrier, or dependency, between their services and other cloud service providers, and facilitated soft lock-in of organisations, owing to the complexity, cost and disruption that attend identity management changes for most customers.

The CMA might wish to examine if these individually or collectively represented use of an unfair market advantage on the part of Microsoft, which no other company could similarly enjoy.

The recently published HMG Security Classification Scheme specifically referred Government Security Advisors to Microsoft guidance and security advice; for the first time, it openly lists a commercial provider as the source of technical advice for UK government security measures, a role previously held uniquely by the National Cyber Security Centre (NCSC).

Industry commentators expressed some surprise upon its publication that specific Microsoft cloud products were now listed within key Cabinet Office policies, and reflected upon both the extent to which the UK government is now dependent upon Microsoft cloud services and their potential ability to influence policy creation.

In its report, Ofcom touched upon the outputs of studies in other countries relating to concerns of governmental public cloud service use for national resilience, as well as reporting upon the decisions made by some UK Critical National Infrastructure (CNI) providers not to make use of these public cloud services since their availability and terms of service preclude such use.

In doing so those CNI providers have stepped outside of HMG policy, but their reasoning appears sound and broadly focussed on a wider public safety interest rather than self-promotion.

Whilst it may not be in the CMA's direct remit to examine these areas, the findings of the Ofcom report should be sufficient to question the extent to which core HMG services and regulated sectors have transitioned to public cloud services.

The CMA may recommend that a national debate be established to examine the suitability of public cloud-based services for blue-light, CNI and core utilities. Such a recommendation, if made, would likely enjoy significant support from security advisors experienced in those areas.

Original post:
Ofcom cloud report: Where the CMA should focus its probe into AWS ... - ComputerWeekly.com

Read More..