
Revolutionizing Manufacturing with High-Performance Computing and Supercomputers – ENGINEERING.com

Oak Ridge National Laboratory Manufacturing Demonstration Facility. (Source: ORNL.)

Producing products quickly, efficiently and at low cost is a focal point for the manufacturing sector. A number of technologies help companies achieve those goals, including high-performance computing (HPC) and supercomputing. The allure of cheap design and production optimization is strong, but a central barrier is the upfront cost of an enterprise owning an HPC or supercomputing machine.

That's why, in 2021, the U.S. Department of Energy began providing companies with HPC access via the High Performance Computing for Energy Innovation program. In addition to providing funding opportunities, the program allows companies to partner with national laboratories that have advanced computing resources up to 100 times more powerful than typical enterprise systems available for private sector use.

HPC and supercomputing technologies could usher in a manufacturing revolution marked by faster product development, improved product quality, reduced costs and increased efficiency.

Ideas sometimes work better in theory than in practice. Being able to visualize how a part or product will behave in the real world prior to the production process can save design teams a lot of headaches.

HPC and supercomputing technologies enable manufacturers to simulate and model products and production processes at a scale that was previously impossible. This allows manufacturers to design better products and optimize manufacturing processes. For example, in the automotive industry, manufacturers use HPC to simulate vehicle crashes and predict the behavior of materials under extreme conditions, which helps in designing safer and more durable vehicles.

A recent example is Tesla's custom Dojo supercomputer, built to expand neural-net training capacity on video data and advance the computer vision technology intended to make its self-driving vehicles safer.

In the aerospace industry, manufacturers use HPC to simulate how certain aircraft components will perform under varying conditions. Such simulations help manufacturers design more fuel efficient and reliable aircraft. For the semiconductor industry, HPC can help optimize performance through design simulation. Even complex systems such as cross-regional transportation networks can benefit from HPC simulation.

Design simulation is a primary area where HPC and supercomputers can help the manufacturing process. However, advanced simulation technologies have other uses during production and post-production.

Once products and systems are up and running, they need to be maintained. Simulation technologies can help pinpoint what maintenance needs to be performed in order to prevent equipment failures, which can be costly.

HPC and supercomputing simulations can help optimize the production process by helping manufacturers identify bottlenecks and inefficiencies. For example, in the chemical industry, manufacturers use simulation and modeling technology to optimize the production process for chemicals such as polymers and plastics. This enables them to reduce the amount of raw materials and energy required to produce a given amount of product, resulting in significant cost savings. Similarly, advanced simulation technologies can help manufacturers like automakers simulate the performance of systems such as brakes under stressful, real-world-like conditions to correct any defects or deficiencies that the models identify.

In high-risk factory conditions, advanced simulations can also help train employees on equipment and tasks before they perform them in a real production environment. This can help reduce the risk of accidents while also enhancing worker productivity.

Getting products to market as fast as possible is a top concern for manufacturers. HPC and supercomputers can help companies stay ahead of the competition. For example, in the pharmaceutical industry, they can accelerate drug discovery by simulating the behavior of molecules and predicting their effectiveness at targeting diseases. This helps quicken the pace at which drugs can move to clinical trials and ultimately enter the market.

Several studies and case studies demonstrate the benefits of using HPC and supercomputers to accelerate product development in manufacturing. A study conducted by the Council on Competitiveness found that the use of HPC and supercomputers in product design and development can reduce product development time and reduce the number of physical prototypes needed.

The U.S. Department of Energy's (DOE) High Performance Computing for Manufacturing program has funded several projects that demonstrate the benefits of using HPC and supercomputers in manufacturing. The Partnership for Advanced Computing in Europe (PRACE) has also funded several projects in the same vein.

Hewlett Packard Enterprise has made its HPE Cray portfolio available to the enterprise. The new HPE Cray EX and HPE Cray XD supercomputers speed up time-to-insight with massive performance and AI-at-scale benefits, delivered in a smaller data center footprint and at a lower price point. This allows manufacturers and other industries to harness insights, solve problems and innovate faster by delivering energy-efficient supercomputers in a smaller form factor and at a lower cost.

The simulation and modeling power of HPC and supercomputers helps reduce manufacturing costs by enabling the avoidance of errors during prototyping, reducing the time and resources needed for design and development and optimizing the supply chain.

The Council on Competitiveness found that using HPC and supercomputers can reduce design and development costs. By optimizing designs through simulation and modeling, manufacturers can avoid costly mistakes that may arise during physical prototyping and testing.

Oak Ridge National Laboratory (ORNL) is helping manufacturers by developing innovative approaches to using its Spallation Neutron Source (SNS) and High Flux Isotope Reactor (HFIR), which let researchers examine microstructures to better design new materials and fabrication methods, and by leveraging multidisciplinary expertise to develop new bio-based materials. These efforts are geared toward driving economic competitiveness, energy efficiency and productivity.

HPC and supercomputing systems are also being combined with robotics and automation to enhance manufacturing.

The technologies can analyze real-time data from sensors in factory environments so that robots can use the insights to adapt to changing conditions while maintaining accuracy and efficiency. The data analysis can also be used to optimize robotic systems for greater performance and efficiency. HPC and supercomputers can be used for virtual commissioning, allowing manufacturers to test and optimize robotic systems in a virtual environment before they are deployed in the real world. Supercomputers are also used to train and deploy machine learning models that can direct robots and autonomous systems to make more precise movements and decisions without human intervention.

A number of companies are using this approach. GE, for example, has developed a software platform called Predix that combines HPC and supercomputers with the Internet of Things (IoT) to optimize the performance of its manufacturing equipment, which has helped reduce downtime and improve overall efficiency. Siemens is using HPC and supercomputers to develop virtual commissioning tools such as Tecnomatix Process Simulate Commissioning and Tecnomatix Plant Simulation Commissioning, which enable manufacturers to test and optimize robotic systems in a virtual environment.

The manufacturing sector is poised for a revolution driven by HPC, supercomputers and AI. Part of that will likely involve the advancement of quantum computing, which has applications for the manufacturing sector as well. Because quantum computers make simultaneous calculations versus the sequential calculations of classical machines, they could enable factory robots to move with greater efficiency and precision, driving better throughput for more complicated tasks. Quantum computers could also advance the creation of new materials for use as semiconductors, industrial production catalysts, electronic components, sustainable fuels, pharmaceuticals and consumer products. As these technologies continue to evolve, it is likely that we will see even more advanced and innovative applications in the manufacturing sector.

This story is one in a series underwritten by AMD and produced independently by the editors of engineering.com. Subscribe here to receive informative infographics, handy fact sheets, technology recommendations and more in AMD's data center insights newsletter.


IDrive gets even better with unlimited backups of your Google … – PCWorld

IDrive is trying something a little different in cloud backups: backing up your entire Google Workspace or Microsoft 365 storage, for only $20 per year per computer.

Confused? Then let's, er, back up. IDrive is one of our most highly recommended cloud backup options, able to back up your PC to the cloud with either a free plan or paid options that go all the way to 20TB. It's basically one of the only good independent cloud storage/backup providers that isn't a major corporation like Google or Microsoft.

Due to be formally announced Thursday, IDrive will now offer unlimited cloud backup for both Google Workspace (Google Drive, Gmail, Calendar, and Contacts) and Office 365 (OneDrive, Exchange, SharePoint, and Teams), called Unlimited Cloud to Cloud Backup. What IDrive will do is back up all of your files in the Google or Microsoft cloud over to IDrive as well. And if you pay for extra storage in either service, IDrive can accommodate all of those files.

Why do this? It's an additional layer of security for those who worry about cloud services going down and data being inaccessible. It sounds a little far-fetched until you remember that Western Digital's cloud service was down for over a week. That's why the rule of three exists: you should have your data stored in three places (a local copy and two backups), just in case. That's what IDrive is offering.

IDrive's Unlimited Cloud to Cloud Backup backs up data from the other clouds three times per day. It retains snapshots of previous versions, allowing you to restore your data from a given point in time and not just all of it at once. You can seek out and restore individual files and folders, too. IDrive considers this service to be an add-on, meaning that you can attach it to existing IDrive plans. (You can also buy Unlimited Cloud to Cloud Backup as a standalone service.)
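IDrive has not published the mechanics of its snapshot selection, but the point-in-time idea itself is simple. The sketch below, using hypothetical timestamps, shows the generic logic a restore tool could use: keep the snapshot history and return the most recent snapshot taken at or before the requested restore point.

```python
from datetime import datetime

# Hypothetical snapshot timestamps, e.g. three backups per day as described above.
SNAPSHOTS = [
    datetime(2023, 5, 1, 2, 0),
    datetime(2023, 5, 1, 10, 0),
    datetime(2023, 5, 1, 18, 0),
    datetime(2023, 5, 2, 2, 0),
]

def pick_snapshot(restore_point: datetime) -> datetime:
    """Return the most recent snapshot taken at or before the requested restore point."""
    candidates = [s for s in SNAPSHOTS if s <= restore_point]
    if not candidates:
        raise ValueError("No snapshot exists before the requested restore point")
    return max(candidates)

# Restore to the state the account was in just before a suspected incident.
print(pick_snapshot(datetime(2023, 5, 1, 12, 30)))  # -> 2023-05-01 10:00
```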

IDrive seems to be pitching this as a solution for individuals, families or small businesses, since a demonstration video showed a backup of a Microsoft cloud with about twenty different users. The only catch, if there is one, is that IDrive charges the $20 annual fee (or roughly $1.66 per month) per computer for backing up one service, Google Drive or Microsoft Office 365.


Gartner: Begun the cloud file services wars have – Blocks and Files

A report put out at the end of February, titled "Modernize Your File Storage and Data Services for the Hybrid Cloud Future," by Gartner research VP Julia Palmer pulls a lot into focus about the cloud file services marketing war, which B&F has written about in the past.

Palmer starts from more or less the same points: unstructured (file) data is growing, and organizations are using the scalability of the public cloud for several things, such as bursting compute capacity, bursting storage, archival storage, remote worker access and so forth.

There are three groups of suppliers, she says:

A table lists representative suppliers in each category:

Next-generation file platforms have characteristics such as being designed to integrate and leverage public cloud IaaS. Vendors like NetApp and Qumulo have become software-defined and ported their filer operating systems to the public cloud, providing a file system and services abstraction layer covering the on-premises and public cloud worlds. VAST Data is heading the same way. Weka is there already.

At the other end of this spectrum (hybrid cloud file data services sit in the middle) are the hybrid cloud file platforms. We have covered CTERA, Nasuni and Panzura often, also LucidLink and to an extent Vcinity, but we have not really included Zadara in this category. Nor have we looked at the public cloud suppliers themselves, or Huawei or Oracle, as being in this category.

The mid-spectrum players provide file data services such as migration (Atempo, Datadobi, Data Dynamics, Peer Software and WANdisco, for example). Komprise also has a migration offering but is better known for its file lifecycle management capabilities, as is Nodeum. Hammerspace is pioneering its data orchestration technology.

Palmer really nails it with a tripartite viewpoint, which is a neat way to group vendors, compare them across the three categories and map their capabilities to an organization's needs and direction.

It leaves us asking if the mid-spectrum hybrid cloud file data services players face having their capabilities provided as features by the next-generation file platforms on the one hand and the hybrid cloud file platforms on the other.

The Gartner paper doesn't look at object storage, which we think may be a limiting factor, especially as combined file and object offerings are becoming common. We can view S3 as AWS's hybrid cloud object platform and also look at Pure Storage, VAST Data and others as offering next-generation file+object platforms, while Cloudian and Scality layer file access on their object storage products, which already have public cloud support.


Quantum computing: What are the data storage challenges? – ComputerWeekly.com

Quantum computing will process massive amounts of information. Workloads could include diagnostic simulations and analysis at speeds far greater than existing computing. But, to be fully effective, quantum computing will need to access, analyse and store huge amounts of data.

There is an expectation that quantum computing will be the next step in the evolution of IT systems. Just as the multicore processor allowed computers to perform multiple tasks in parallel, quantum processors will be a leap forward in compute power and allow performance of complex tasks in a fraction of the time required now.

Quantum computers, as the name implies, use quantum mechanics, the branch of physics concerned with atomic and subatomic particles, to overcome the limitations of existing computer systems.

The principles of the superposition of states and quantum entanglement enable a different computation method from that used currently. A quantum computer can potentially store more states per unit of information, called quantum bits or qubits, and operate with much more efficient algorithms at the numerical level.

Qubits are a two-state quantum-mechanical system. However, because of superposition, they can also be in both of the two states, 1 and 0, at the same time. In a classical computer system, a bit would have to be in one state or the other: 1 or 0. Quantum mechanics allows a qubit to be in a coherent superposition of both states simultaneously, a property that is fundamental to quantum mechanics and therefore to quantum computing.
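As general quantum computing background (not specific to this article), the superposition described above is usually written in the standard textbook notation below.

```latex
% A single qubit in superposition of the basis states |0> and |1>:
% measuring it collapses the state to |0> with probability |alpha|^2
% and to |1> with probability |beta|^2.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
```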

"At the core of the quantum computer's potential for achieving exponentially greater computational power lies the qubit's capacity to exist in a state of superposition" (Martin Weides, Glasgow University)

Ultimately, this will allow quantum computers to process complex tasks using large datasets far more quickly than a classical computer, especially in the realms of big data and pattern recognition. For example, quantum computers have potential applications in the pharmaceutical industry: to screen larger and more complex molecules than was previously possible, and to map the complex interactions between a pharmaceutical product and its intended target.

"At the core of the quantum computer's potential for achieving exponentially greater computational power lies the qubit's capacity to exist in a state of superposition," explains Martin Weides, professor of quantum technologies at Glasgow University. "It gives you a statistical answer of likelihoods and then you repeat the calculation a number of times, and amplify that result. At the end, you get some outcome, but it's not with 100% certainty."
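Weides' point, that you repeat the calculation many times and read off a statistical answer, can be illustrated with a purely classical toy simulation of repeated qubit measurements. The probability and shot count below are arbitrary; this is a sketch of the statistics, not a quantum program.

```python
import random

def measure(prob_one: float) -> int:
    """Simulate one measurement of a qubit that yields |1> with probability prob_one."""
    return 1 if random.random() < prob_one else 0

def estimate(prob_one: float, shots: int = 10_000) -> float:
    """Repeat the measurement many times ('shots') and estimate the likelihood of |1>."""
    return sum(measure(prob_one) for _ in range(shots)) / shots

# A qubit prepared so that |1> should appear ~80% of the time: a single shot tells
# you almost nothing, but thousands of shots recover the probability with high
# confidence, though never with 100% certainty.
print(estimate(0.8))   # ~0.8
```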

One of the core challenges of quantum computers is that their storage is unsuitable for long-term use due to quantum decoherence, the effect of which builds up over time. Decoherence occurs when qubits interact with their environment, for example when quantum data is brought into existing data storage frameworks, causing them to lose their quantum state and resulting in corrupted data and data loss.

"Quantum mechanical bits can't be stored for long times as they tend to decay and collapse after a while," says Weides. "Depending on the technology used, they can collapse within seconds, but the best ones are in a minute. You don't really achieve 10 years of storage. Maybe in two decades we might get there, but it's not required either."

Quantum computers will need data storage during computation, but that needs to be a quantum memory for storing super-positioned or entangled states, and storage durations are going to present a challenge.

So, its likely data storage for quantum computing will need to rely on conventional storage, such as in high-performance computing (HPC).

Considering the massive financial investment required for quantum computing, introducing the limitation of cheap data storage elements as a cost-saving exercise would be counter-productive.

Given the data storage challenges and requirement to process large datasets, quantum computing is likely to be best accessed through the cloud. IBM's current quantum systems are cloud-connected, for example. Naturally, the effectiveness of cloud storage is dependent on network connectivity to the cloud.

Although quantum computing faces challenges in scalability and decoherence, its ability to perform multiple simultaneous operations in a fraction of the time it would take conventional processors means it is likely to become a powerful tool for analytics workloads.

Quantum computing and quantum storage are unlikely to replace existing compute and storage systems.

Using classical compute and storage infrastructure will remain the easiest and most economical solution for tackling everyday problems, especially those that involve small, simple, repeatable tasks.

That said, quantum computing promises to deliver incredible advances in fields that include materials science, climate research and pharmaceutical development. Organisations are already experimenting with quantum computing to develop lighter and more powerful batteries for electric cars, and to help create new medicines.

The limited storage capabilities associated with quantum computers mean they will continue to be dependent on classical storage systems for data extraction and information output. However, these would have to be capable of handling large datasets. Some of today's high-end storage systems, especially those that are cloud-based, should be more than adequate for the task.

"A quantum computer being so expensive would almost certainly be operated in a dedicated facility with lots of new hardware, including storage," concludes Weides.


Google to build two new data centers in Ohio in addition to an … – The Statehouse News Bureau

Ohio weather might be cloudy much of the year but Google is going to put even more clouds here.

The tech giant is bringing two more data centers to Ohio to handle cloud storage, artificial intelligence and more. And these data centers will be in addition to the one that's already located near the future site of Intel in central Ohio.

Google Vice President of Government and Public Policy Mark Isakowitz said the company's existing data center near Columbus has generated $13.9 billion of economic activity for tens of thousands of businesses. And now, he said, the company will build two more: one in Lancaster and the other in Columbus.

"We have data centers in 13 states and nine countries and we are going to have three operational data centers right here in central Ohio, so that's really exciting," Isakowitz said.

Isakowitz, an Ohio native, said it's unusual to have three data centers located so close together. But he said Ohio has the water, land and natural resources the company needs. And he said Google likes the local business partners here in Ohio.

"So you feel you can build out a workforce, have enough space and room to develop the data centers. It's a combination of policies, people, natural resources that creates a good ecosystem," Isakowitz said.

The new data centers will bring the company's investment in Ohio to more than $2 billion, all in the central Ohio region. Democratic Columbus Mayor Andrew Ginther said the area has the fastest growing economy in the Midwest.

"Google is playing a big part in making the region a hub for cloud computing and priming the region for even more development. The future is very bright in Columbus and Central Ohio," Ginther said.

Google isn't saying how many new jobs will be created. And it won't divulge other terms of the agreement. Republican Gov. Mike DeWine isn't divulging that information either. But he said more of this type of development will happen if the state invests in job training and education.

"We have to keep bringing these companies into the state of Ohio and the best way to do that, frankly, is to show them that Ohio is dead serious about investing in our people," DeWine said.

DeWine's proposed two-year state budget included millions of dollars for K-12 education. He had proposed $300 million for tech center improvements alone. He's been urging lawmakers to keep that money in there as the budget goes through the legislative process.

Ohio has been the site of big investments by tech companies recently. Last year, Intel announced it would be building a $20 billion computer chip factory in New Albany, east of Columbus. And Honda and LG Energy Solution have teamed up to build a $3.5 billion battery plant in Jeffersonville, about halfway between Columbus and Cincinnati.


Cloudy with a Chance of Bad Logs: Cloud Platform Log … – Mandiant

More and more organizations utilize cloud technology for applications, file storage, and more. However, if an attacker compromises a cloud environment, organizations may not know how to investigate those technologies, or may not even be logging the evidence that could allow the organization to identify what an attacker did.

This blog post describes a hypothetical scenario of a cloud platform compromise with multiple components that would require investigation. Each component is an example of a real intrusion tactic that Mandiant has investigated across various cloud platforms, sometimes with logs available and sometimes without logs available.

For each part of the compromise, we provide recommended logging configurations and investigation processes organized into cloud technology themes that group cloud services from Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure together:

After reading through this scenario, you should be able to:

While we review many concepts, there are some limitations to be aware of in the scope of this post:

The attacker gained access to the Cloud Email platform through a credential stuffing attack against a cloud administrator account. Once the attacker found a valid password, the attacker authenticated with those credentials and the Cloud Email platform asked them which type of multi-factor authentication (MFA) process they preferred. The attacker chose the push option, which sent an approval request to the legitimate user. The administrator user deals with push authentication requests throughout the day for various services and mistakenly accepted the authentication request, which provided initial access to the attacker.

Once the attacker identified the cloud administrator credentials and authenticated, they logged in to the Cloud Management Console to identify other applications that the user could access.

The attacker identified that the cloud administrator account had access to the Cloud Authentication Services application and authenticated to it. In the Cloud Authentication Services application, the attacker changed the privileges of the cloud administrator to the highest global administrator account privileges available and removed the multi-factor requirement.

While in the Cloud Management Console, the attacker identified that the organization uses a custom Cloud Application. The attacker accessed the Cloud Code Repository with the global administrator account and identified the Cloud Application source code hosted there. The attacker accessed the code and identified plain-text hard-coded credentials for an application service account.

While in the Cloud Authentication Services application, the attacker identified that the Administrator had access to the Cloud Logging platform. The attacker authenticated to the Cloud Logging platform and searched logs for keywords related to plain-text credentials. The attacker exported logs that contained those keywords, particularly database user credentials.

The attacker returned to the cloud Authentication Service application and performed reconnaissance on systems and users. The attacker exported all environment objects including systems and accounts.

Next, the attacker pivoted to the Cloud Virtual Machine infrastructure and created a templated virtual machine. The attacker assigned the virtual machine to the application service account previously identified in the application source code. The attacker configured the Cloud Networking rules to allow remote desktop protocol (RDP) access from the internet. The application service account did not require MFA for any authentication activity because of its intended use. The attacker logged on to the virtual machine through RDP from their command and control (C2) server.

While logged on to the newly created virtual machine, the attacker identified a database server based on the hostname SQLDB01. The attacker moved laterally from the virtual machine they created to the database server via RDP using the application service account.

The attacker connected to the database, which utilized a Cloud Database Service backend, using the database user credentials previously identified in logs and explored the data by enumerating the table schema and running SELECT * queries.
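The article does not name the database engine involved, so the following is only an illustration of the reconnaissance pattern it describes: enumerate the schema, then dump tables with broad SELECT * queries. The sketch uses SQLite purely because it is self-contained, and the table and data are made up. Query patterns like this are one reason the database logging items later in the checklist (data access and long-running queries) matter.

```python
import sqlite3

# Illustrative only: SQLite stands in for the unnamed cloud database backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, card_number TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'example', '4111-1111-1111-1111')")

# Step 1: schema enumeration (the equivalent of querying INFORMATION_SCHEMA elsewhere).
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

# Step 2: broad SELECT * queries against every discovered table.
for table in tables:
    print(table, conn.execute(f"SELECT * FROM {table}").fetchall())
```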

While logged on to the attacker-created virtual machine, the attacker also performed internal reconnaissance to identify other systems of interest. The attacker scanned the network for other systems using custom port scanning utilities that searched for open SSH, RDP, and SMB ports.
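As an illustration of what such port scanning utilities boil down to, here is a minimal TCP connect check against the remote-access ports mentioned above. The target address is hypothetical, and a defender would look for exactly this pattern, many connection attempts to ports 22, 3389 and 445, in network flow logs.

```python
import socket

# Minimal TCP connect scan of the kind described above: check whether common
# remote-access ports (SSH 22, RDP 3389, SMB 445) accept connections.
# Target address is hypothetical; run only against hosts you are authorized to test.
TARGET = "10.0.0.5"
PORTS = {22: "SSH", 3389: "RDP", 445: "SMB"}

for port, service in PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        status = "open" if s.connect_ex((TARGET, port)) == 0 else "closed/filtered"
        print(f"{TARGET}:{port} ({service}) {status}")
```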

The attacker identified a network-shared file server that hosted files on a Cloud File Storage solution. After enumerating files stored on the network share, the attacker copied files to their C2 system using a bulk network file transfer utility.

While accessing the file server, the attacker also decided to stage further backdoors in trojanized files that are likely to be opened by users.

While logged on to cloud email for the administrator account, the attacker browsed through the last several days of messages. The attacker looked at email folders named "finance" and "hr" and downloaded attachments from sent messages.

The attacker shared the uploaded trojanized backdoor file through the collaboration platform's file sharing service with 20 users.

Several users messaged the administrator's account and asked questions about errors opening the new document they had downloaded through the collaboration platform's automated file sharing email link. The attacker replied to tell the users the document was legitimate.

Finally, in an attempt to delay detection, the attacker created a mailbox rule to automatically delete replies to the compromised file share email.

The aforementioned hypothetical scenario took place in a matter of days, reflecting how quickly the threat actors moved in the real scenarios this one is based on. In these cases, information security teams commonly have only a few medium-priority alerts fire, and those go unnoticed due to the abundance of alerts feeding in from their tools.

In this scenario, suspicion started when several helpdesk team members realized they had separate reports of users who had suspicious files shared with them. The helpdesk team escalated to Information Security per their documented processes and the Incident Response (IR) team started an investigation into the cloud file sharing platform associated with the file sharing.

The IR team quickly realized that the default logging available with their lowest cost license subscription recorded many useful logs such as:

Unfortunately, the investigation could not answer the question "did the attacker access any email messages or synchronize any mailboxes?" due to the default logging levels. The IR team also realized they were lucky the incident was detected relatively quickly because the default license subscription only stored logs for 90 days with their Cloud Logging platform.

After a post-mortem review several months later, the organization realized the IR team had only reviewed collaboration platform authentications and did not cross-reference them against domain authentication logs. This meant that the internal team never identified that the attacker compromised the cloud infrastructure platform and performed follow-on activities such as creating and accessing a VM, elevating to domain administrator privileges, and interacting with file servers. They focused only on the collaboration platform because the initial incident identification occurred after the sharing of files on the Collaboration Cloud File Sharing platform. The investigation had to be reopened several months later, when evidence had started to disappear from Cloud Logging sources.

As the scenario demonstrates, attackers have a wider surface area to persist and steal data because of the adoption of cloud infrastructure and collaboration platforms. The move to these cloud platforms brings useful functionality and security features, but configuring everything correctly can be overwhelming for a team that is new to the technology.

Not only are there many access, permission, and protection configurations to consider, but teams should also make sure that they would be able to fully investigate various attacks that could happen by storing the correct logs.

Understanding what technologies your organization uses and performing threat modeling is one way to make sure you have these logs and investigative processes set up should you need to investigate.

For details on how Mandiant can assist with your cloud security, please check out the following resources:

The following attack path diagram visualizes how the actor accessed a wide range of cloud platforms from outside a standard perimeter in this scenario. The actor also used cloud technologies to interact with systems in the non-cloud environment as well through connections and integrations.

The following checklist is designed to be copied or printed for your cloud infrastructure logging review efforts. The provided logs are example categories of commonly utilized event logs for forensic investigations.

| Reference Number | Technology | Log Type |
| --- | --- | --- |
| 1.1.1 | Cloud Virtual Machines | Configure system event logs to follow standard endpoint logging policies for authentication, user activity, and privileged account use. |
| 1.1.2 | Cloud Virtual Machines | Log virtual machine management actions such as start, pause, backup, snapshot, create, delete, and command executions. |
| 1.1.3 | Cloud Virtual Machines | Forward system logs to a log management platform or SIEM as part of standard policies and processes. |
| 1.2.1 | Applications or Functions | Log web server access to the application, including source IP address, protocol used, request parameters, response status, user agent, referrer, and response size. Ensure that the source IP address is not overwritten by proxy or load balancer technology. |
| 1.2.2 | Cloud Applications, Containers, and Functions | Log creation, modification, and access to application code. |
| 1.2.3 | Cloud Applications, Containers, and Functions | Record successful and failed authentication activity including source IP address. |
| 1.2.4 | Cloud Applications, Containers, and Functions | Log application user activity including user account, information viewed, actions performed, and sensitive data accessed. |
| 1.2.5 | Cloud Applications, Containers, and Functions | Forward system logs to a log management platform or SIEM as part of standard policies and processes. |
| 1.3.1 | Cloud Database Services | Log database user authentication and source network address. |
| 1.3.2 | Cloud Database Services | Log data access including source network address and user. |
| 1.3.3 | Cloud Database Services | Log data modification and deletion including source network address and user. |
| 1.3.4 | Cloud Database Services | Forward system logs to a log management platform or SIEM as part of standard policies and processes. |
| 1.3.5 | Cloud Database Services | Log errors and long-running queries, which could be indicative of data transfer or reconnaissance. |
| 1.4.1 | Cloud File Storage | Log user authentication. |
| 1.4.2 | Cloud File Storage | Log file creation, modification, upload, and deletion events with user account, IP address, and timestamp. |
| 1.4.3 | Cloud File Storage | Log file download events with user account, source IP address, and timestamp. |
| 1.4.4 | Cloud File Storage | Log location, folder, and file permission changes. |
| 1.4.5 | Cloud File Storage | Log API access to file storage locations, folders, and files. |
| 1.4.6 | Cloud File Storage | Log file and directory listing metadata views. |
| 1.4.7 | Cloud File Storage | Turn on alerts for suspicious activity, including malware and mass downloads, if available. |
| 1.5.1 | Cloud Authentication Services | Log user authentication with timestamp, username, and source IP address. |
| 1.5.2 | Cloud Authentication Services | |


Independent Cloud Computing Leader Vultr Expands Global Footprint by Launching a New Data Center Location in Tel Aviv to Service the Growing Tech…

Vultr also opens up a new location in Manchester, England, expanding its global footprint to over 30 locations

WEST PALM BEACH, Fla., May 02, 2023--(BUSINESS WIRE)--Vultr, the world's largest privately-held cloud computing company, today announced the latest expansion of its global footprint in Tel Aviv, Israel, to provide democratized access to cloud infrastructure to the growing technology ecosystem in the country. With the addition of Tel Aviv and its new location in Manchester, England, Vultr is continuing its cadence of global expansion by moving closer to eclipsing the data center availability offered by the big three hyperscaler cloud providers.

The Tel Aviv data center location puts Vultr on the map in Israel, where AWS and Azure have yet to establish availability zones. Vultr offers a full-stack infrastructure in both Tel Aviv and Manchester, including cloud and GPU compute, Kubernetes clusters, managed databases, storage-as-a-service, and more. Establishing access to infrastructure-as-a-service (IaaS) in Israel is the latest advance in Vultr's quest to provide full-stack infrastructure services in geographic regions underserved by the hyperscale cloud providers and democratize access to affordable cloud services for all organizations.

Vultr's availability in Tel Aviv now means that organizations with operations in Israel can access cloud compute infrastructure and services domestically to work locally and collaborate globally while maintaining data compliance and minimizing costly data transfer fees. The same applies to Vultr's Manchester data center location, which complements Vultr's presence in London and fortifies Vultr's already-solid position in the U.K. and beyond.

Headquartered in Tel Aviv, BBT.live uses Vultr for compute instances with its secured network connectivity solution, BeBroadband, enabling service providers to offer uncomplicated connectivity to their startup and enterprise customers around the world.


"Vultr's adaptable model and exceptional engagement allow us to set up our Points of Presence (PoPs) on demand and deliver our services within a matter of hours rather than weeks to expand our business to new geographies rapidly," said Erez Zelikovitz, EVP, Chief Revenue Officer and Chief Product Officer at BBT.live. "We are delighted that Vultr is available at a Tel Aviv data center, close to our headquarters, which complements our already extensive list of locations across North America, Europe, and Asia where BeBroadband-Cloud PoPs are hosted."

"By bringing affordable, enterprise-grade cloud services to organizations around the world, Vultr is leveling the playing field for businesses striving to introduce breakthrough innovation in startup nations like Israel," said J.J. Kardwell, CEO of Vultrs parent company, Constant. "Vultr is breaking the big three hyperscalers stranglehold on customers in need of cloud infrastructure, who must endure the lock-in and exorbitant pricing associated with these inflexible cloud behemoths in exchange for IaaS access that isnt customized to each organizations unique profile of needs."

Technology, digital, and cyber security startups, alongside established enterprises in the country, now need high-performance cloud resources. Vultr is stepping into the market to provide access to flexible cloud resources, spanning from bare metal options to GPU compute available on demand. Vultr ensures that access to these valuable resources isn't limited to just the tech giants. Businesses looking to power generative AI solutions like ChatGPT or run other compute-intensive applications can now leverage the flexibility and cost-saving advantages Vultr brings to an ever-growing number of data center locations.

Vultr will host two events in Tel Aviv for the launch of its availability in Israel: a media breakfast briefing at The Norman Hotel on Tuesday, May 16, from 9:30-10:30 am IST, and the industry event, Cloud as You Are, at Pop & Pope on Tuesday, May 16, from 6:00-8:00 pm IST. For more information on both events and attendance information, visit https://experience.vultr.com/Tel-Aviv-Launch.html.

About Constant and Vultr

Constant, the creator and parent company of Vultr, is on a mission to make high-performance cloud computing easy to use, affordable, and locally accessible for businesses and developers around the world. Constant's flagship product, Vultr, is the world's largest privately-held cloud computing platform. A favorite with developers, Vultr has served over 1.5 million customers across 185 countries with flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. Founded by David Aninowsky and completely bootstrapped, Constant has become one of the largest cloud computing platforms in the world without ever raising equity financing. Learn more at http://www.constant.com and http://www.vultr.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20230501005629/en/

Contacts

Ally Corlett, vultrpr@scratchmm.com


Are Your Cloud Costs Sky High? – BBN Times

Cloud storage is an important element of modern computing, allowing businesses to store and access data quickly and securely.

Using a cloud storage service can be expensive, though. With more and more sectors relying on the cloud for data storage and transfer, it is no surprise that many seek a solution for the rising, sky-high costs of cloud service providers. So, in this article, we'll discuss why your cloud costs may be reaching high levels, as well as ways to monitor and potentially reduce your cloud spending.

First and foremost, it is important to understand the different types of cloud storage services, and the associated costs. Cloud storage services can be broken down into three primary categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Each of these has its own set of associated costs, ranging from setup fees to monthly usage fees. Plus you may also incur additional fees as you begin using the service, such as support fees for when you need the advice of expert cloud engineers.

What are you paying for? Depending on the particular cloud storage service, you may be paying for different elements. In addition to the fees outlined above, you could also be charged for data transfer bandwidth or usage of applications hosted from the cloud. You may also have to pay extra for additional storage capacity or specific features and services that are not included in a basic subscription package.
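As a rough illustration of how those elements combine into a bill, the sketch below totals storage, egress, subscription and support charges. All prices and the function itself are placeholders, not any provider's actual rates.

```python
# Back-of-the-envelope monthly bill built from the cost elements listed above.
# All prices are placeholders, not any provider's actual rates.
PRICE_PER_GB_STORED = 0.023      # $/GB-month, storage capacity
PRICE_PER_GB_EGRESS = 0.09       # $/GB, data transfer out
SUBSCRIPTION_FEE = 50.00         # $/month, base package and features
SUPPORT_FEE = 100.00             # $/month, optional expert support

def monthly_cost(stored_gb: float, egress_gb: float, support: bool = False) -> float:
    cost = SUBSCRIPTION_FEE
    cost += stored_gb * PRICE_PER_GB_STORED
    cost += egress_gb * PRICE_PER_GB_EGRESS
    if support:
        cost += SUPPORT_FEE
    return round(cost, 2)

# 5 TB stored, 500 GB transferred out, with paid support:
print(monthly_cost(stored_gb=5_000, egress_gb=500, support=True))  # 310.0
```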

One reason your cloud costs may be high is inefficient use or excessive storage. If you are currently storing large amounts of data that isn't being used, this can significantly drive up your costs. It's important to periodically review the data stored in your cloud and delete any unnecessary files or folders.

While it may be easy to point to the provider's prices, it is also useful to review what you are paying for and decide whether it is necessary for your organization. Different providers may offer varying levels of storage, features, or services that could be more cost-effective for your organization. Before signing up to access cloud services from your chosen provider, be sure to compare different providers and packages. You may find that certain providers offer better packages on a month-to-month basis.

It sounds simple to suggest reviewing your payments and usage to ensure that you are paying only for what you need from your cloud service provider. However, you may be wondering how to do this when your usage and requirements often fluctuate.

One of the best ways to manage and optimize your cloud costs is to work with cloud cost optimization tools. These tools provide daily, real-time monitoring of your cloud usage and can produce reports and analyses of your spend.

For large tech firms this is incredibly useful as cloud costs can often exceed thousands of dollars per month.

Not only are these tools automated, but they are incredibly efficient and do not require any additional software engineering work. With other features such as autoscaling and security insights, your engineers can stay on top of your organization's cloud usage without lifting a finger.

Cost optimization tools work by analyzing your cloud system for any inefficiencies or waste. They can identify areas where you may have over-provisioned resources, unused services, or misconfigured settings that are driving up costs unnecessarily. Additionally, these tools provide cost forecasts and recommendations on how to reduce your costs without compromising performance or security. By implementing a cost optimization tool in your cloud strategy, you will benefit from accurate reporting and monitoring that will not only help keep your cloud costs low but also offer a wide range of other benefits to your IT team.
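A simplified sketch of that analysis is shown below: flag resources whose average utilization falls below a threshold and total up what they cost. The resource names, utilization figures and threshold are invented for illustration; real tools work from provider billing and monitoring data.

```python
# A simplified version of the kind of check cost optimization tools automate:
# flag resources whose average utilization is low enough that they are likely
# over-provisioned or unused. Figures below are made up for illustration.
usage_report = [
    {"resource": "vm-analytics-01", "avg_cpu_pct": 4.0,  "monthly_cost": 220.0},
    {"resource": "vm-web-02",       "avg_cpu_pct": 61.0, "monthly_cost": 180.0},
    {"resource": "db-staging",      "avg_cpu_pct": 1.5,  "monthly_cost": 340.0},
]

IDLE_THRESHOLD_PCT = 5.0

idle = [r for r in usage_report if r["avg_cpu_pct"] < IDLE_THRESHOLD_PCT]
potential_savings = sum(r["monthly_cost"] for r in idle)

for r in idle:
    print(f"{r['resource']}: {r['avg_cpu_pct']}% avg CPU, ${r['monthly_cost']}/month")
print(f"Potential monthly savings if right-sized or removed: ${potential_savings}")
```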

Cloud storage is an essential part of modern computing, but it comes with the potential for sky-high costs. By understanding the different types of cloud storage and their associated costs, you can take steps to monitor and reduce your spending. Furthermore, cost optimization tools can help automate this process and provide insights into how you are using your cloud services, offering not only automated analysis of your cloud spending but also real-time, continuous monitoring that optimizes your usage.


Network of Transnational Fraudsters Indicted for Racketeering in … – Department of Justice

A federal grand jury in Los Angeles has returned an indictment charging 14 defendants for their participation in a years-long scheme to steal millions of dollars from American consumers' bank accounts, the Justice Department announced today.

According to court documents, Edward Courdy, 73, of Hawaiian Gardens, California; Linden Fellerman, 67, of Las Vegas; Guy Benoit, 68, of Cyprus; Steven Kennedy, 54, of Canada; Sayyid Quadri, of Canada; Ahmad Shoaib, 63, of Canada; John Beebe, 52, of Honolulu; Michael Young, 41, of Hollywood, Florida; Lance Johnson, 52, of Laveen, Arizona; Jenny Sullivan, 46, of Denver; Veronica Crosswell, 35, of Long Beach, California; Eric Bauer, 65, of Huntington Beach, California; Randy Grabeel, 71, of Pittsburg, California; and Debra Vogel, 68, of Las Vegas, were members and associates of a racketeering enterprise that unlawfully debited money from the bank accounts of unknowing U.S. consumer-victims.

Through various members and associates, the enterprise obtained identifying and banking information for victims, and created shell entities that claimed to offer products or services, such as cloud storage. The enterprise then executed unauthorized debits against victims' bank accounts, which it falsely represented to banks were authorized by the victims. Some of the unauthorized debits resulted in returned transactions, which generated high return rates. To both conceal and continue conducting unauthorized debits, the enterprise's shell entities also generated micro debits against other bank accounts controlled and funded by or for the enterprise. The micro debits artificially lowered shell entities' return rates to levels that conspirators believed would reduce bank scrutiny and lessen potential negative impact on the enterprise's banking relations.

Co-conspirator Harold Sobel was previously convicted for his role in the scheme in Las Vegas federal court and sentenced to 42 months in prison. In a related civil case also filed in Los Angeles federal court, injunctive relief and settlements totaling nearly $5 million were obtained against various persons, including several who are charged in this criminal indictment.

"The scheme alleged in the indictment involved an elaborate plot to reach into consumers' bank accounts and steal their hard-earned savings," said Principal Deputy Assistant Attorney General Brian M. Boynton, head of the Justice Department's Civil Division. "The Department of Justice will use all of the tools at its disposal to prosecute such schemes."

"This sophisticated scheme allegedly generated millions of dollars in revenue by stealing consumers' personal information and then using that information to fraudulently reach straight into the bank accounts of thousands of Americans," said U.S. Attorney Martin Estrada for the Central District of California. "The indictment alleges that an international network of fraudsters engaged in a wide-ranging ring which sought to victimize consumers while concealing their activities from banks and law enforcement authorities. Thanks to law enforcement, the defendants' alleged efforts to continue this scheme have failed."

"The U.S. Postal Inspection Service (USPIS) is committed to protecting the U.S. Postal Service and its customers, the American people," said Inspector in Charge Eric Shen of the USPIS Criminal Investigations Group. "This case is illustrative of our efforts to protect American consumers from a sophisticated fraud scheme that cost American consumers millions of dollars. Postal Inspectors are proud to partner with the Department of Justice to put a stop to these types of schemes."

Courdy, Fellerman, Benoit, Kennedy, Quadri, Shoaib, Beebe, Young, Johnson, Sullivan, Crosswell, and Bauer are charged with racketeering conspiracy and wire fraud; Grabeel and Vogel are charged with racketeering conspiracy. Some defendants made their initial court appearances yesterday. If convicted, each defendant faces a maximum penalty of 20 years in prison for racketeering conspiracy and, if applicable, 30 years in prison for each count of wire fraud. A federal district court judge will determine any sentence after considering the U.S. Sentencing Guidelines and other statutory factors.

The department urges individuals to be on the lookout for unauthorized debits to their accounts. Regularly check your bank, credit card, and other financial statements and contact your financial institution if you see a charge you do not recognize. Report any fraudulent debit you identify to law enforcement. Reports may be filed with the FTC at http://www.ftccomplaintassistant.gov or at 877-FTC-HELP.

The USPIS is investigating the case.

Trial Attorneys Wei Xiang, Meredith Healy, and Amy Kaplan of the Justice Department's Consumer Protection Branch and Assistant U.S. Attorney Monica Tait for the Central District of California are prosecuting the case. The U.S. Attorney's Office for the Southern District of Texas provided substantial assistance.

The Consumer Protection Branch, in conjunction with the USPIS, is pursuing wrongdoers who disguise the unlawful nature of business activities by, among other methods, artificially lowering financial account return rates. These tactics are designed to deceive banks, resulting in bank accounts remaining open and facilitating fraud schemes and other illegal activities, including schemes that debit consumers' bank accounts without authorization, tech support scams, and subscription traps.

An indictment is merely an allegation. All defendants are presumed innocent until proven guilty beyond a reasonable doubt in a court of law.


Curtiss-Wright unveils rugged, full-stack hybrid cloud solution for … – Asia Pacific Defence Reporter

Curtiss-Wright's Defense Solutions division, a developer and supplier of advanced Modular Open Systems Approach (MOSA) communications solutions for the U.S. Department of Defense (DoD), has collaborated with Nutanix to certify the Nutanix Cloud Platform (NCP) solution on the PacStar Modular Data Center (MDC), Curtiss-Wright's COTS-based, modular, tactical, and expeditionary rugged data center. The use of these interoperability-tested technologies creates a combined solution that is capable of hosting cloud/storage, AI, and analytics applications. It enables warfighters to deploy data center-class computing and storage at the edge of tactical networks and rapidly set up a secure communications system in the field within hours instead of days. With support for the Nutanix Cloud Platform, a PacStar MDC-NR system can cluster together multiple PacStar servers, enabling shared compute and storage resources to run virtual machines (VM) and hypervisor containers.

"Nutanix offers powerful tools for hybrid-cloud infrastructure and the robust data protection required for tactical operational environments," said Dominic Perez, CTO, Curtiss-Wright Defense Solutions division. "When combined with the PacStar Modular Data Center, Nutanix Cloud Platform creates a powerful, transportable data center for the warfighter. We are excited to collaborate with Nutanix to host their powerful Nutanix Cloud Platform on our industry-leading PacStar MDC at the tactical edge of the battlefield."

"We are proud to partner with Curtiss-Wright to provide a rugged, integrated solution to support in-theatre communications, CSfC, C5ISR/EW and other mission-critical applications and data vital to mission success," said Chip George, Vice President, Public Sector at Nutanix. "Like the PacStar MDC, Nutanix software is hardened and secure upon delivery, reducing the burden of IT administrators so that they can focus on the mission at hand. Especially for austere environments, the simplicity and ease of use of our software means that warfighters can easily deploy and manage this system without needing specialised technical training."

This Curtiss-Wright/Nutanix solution provides a common cloud platform and user experience, whether at the tactical edge or the HQ datacenter. Ideal for JADC2 enablement, the PacStar MDC-NR brings the Nutanix hybrid-cloud model to the tactical edge of the battlefield to integrate with leading cloud providers, such as Amazon Web Services Inc., Google Support Services LLC, Microsoft Corporation, and Oracle, all of which were recently selected by the Department of Defense (DoD) to support its Joint Warfighting Cloud Capability (JWCC). Nutanix is the only HCI provider included on the Department of Defense Information Network Approved Products List (DoDIN APL), having passed DISA's rigorous security and interoperability testing.

PacStar MDC-NR meets size, weight and power (SWaP) requirements unmatched by other COTS appliances of its type, which is ideal for defence, education, energy, first responder, and healthcare organisations. It brings NCP to the tactical edge, enabling customers to forward deploy/replicate cloud services and provide edge access to enterprise cloud systems. PacStar MDC-NR enables users to leverage the power of cloud computing in the battlefield to support applications such as video storage and analytics, intelligence analytics, and cybersecurity. PacStar MDC-NR can be used to host and store enterprise or tactical applications, and it delivers the high-performance compute power (Intel Xeon-D processing in 8-core, 12-core, and 16-core variants) demanded by AI and ML applications.
