
One Way to Prevent Police From Surveilling Your Phone – The Intercept

Federal agents from the Department of Homeland Security and the Justice Department used a sophisticated cell phone cloning attack, the details of which remain classified, to intercept protesters' phone communications in Portland this summer, Ken Klippenstein reported this week in The Nation. Put aside for the moment that, if the report is true, federal agents conducted sophisticated electronic surveillance against American protesters, an alarming breach of constitutional rights. Do ordinary people have any hope of defending their privacy and freedom of assembly against threats like this?

Yes, they do. Here are two simple things you can do to help mitigate this type of threat:

Without more details, it's hard to be entirely sure what type of surveillance was used, but The Nation's mention of cell phone cloning makes me think it was a SIM cloning attack. This involves duplicating a small chip used by virtually every cellphone to link itself to its owner's phone number and account; this small chip is the subscriber identity module, more commonly known as a SIM.

Here's how SIM cloning would work:

SIM cards contain a secret encryption key that is used to encrypt data between the phone and cellphone towers. They're designed so that this key can be used (like when you receive a text or call someone) but so the key itself can't be extracted.

But it's still possible to extract the key from the SIM card by cracking it. Older SIM cards used a weaker encryption algorithm and could be cracked quickly and easily, but newer SIM cards use stronger encryption and might take days or significantly longer to crack. It's possible that this is why the details of the type of surveillance used in Portland remain classified. Do federal agencies know of a way to quickly extract encryption keys from SIM cards? (On the other hand, it's also possible that cell phone cloning doesn't describe SIM cloning at all but something else instead, like extracting files from the phone itself instead of data from the SIM card.)

Assuming the feds were able to extract the encryption key from their target's SIM card, they could give the phone back to their target and then spy on all their target's SMS text messages and voice calls going forward. To do this, they would have to be physically close to their target, monitoring the radio waves for traffic between their target's phone and a cell tower. When they see it, they can decrypt this traffic using the key they stole from the SIM card. This would also fit with what the anonymous former intelligence officials told The Nation; they said the surveillance was part of a Low Level Voice Intercept operation, a military term describing audio surveillance by monitoring radio waves.

If you were arrested in Portland and you're worried that federal agents may have cloned your SIM card while you were in custody, it would be prudent to get a new SIM card.

Even if law enforcement agencies don't clone a target's SIM card, they could gather quite a bit of information after temporarily confiscating the target's phone.

They could power off the phone, pop out the SIM card, put it in a separate phone, and then power that phone on. If someone sends the target an SMS message (or texts a group that the target is in), the feds' phone would receive that message instead of the target's phone. And if someone called the target's phone number, the feds' phone would ring instead. They could also hack their target's online accounts, so long as those accounts support resetting the password using a phone number.

But, in order to remain stealthy, they would need to power off their phone, put the SIM card back in their target's phone, and power that phone on again before returning it, which would restore the original phone's access to the target's phone number, and the feds would lose access.

The simplest and best way to protect against SIM cloning attacks, as well as eavesdropping by stingrays, controversial phone surveillance devices that law enforcement has a history of using against protesters, is to stop using SMS and normal phone calls as much as possible. These are not and have never been secure.

Instead, you can avoid most communication surveillance by using an end-to-end encrypted messaging app. The Signal app is a great choice. It's easy to use and designed to hold as little information about its users as possible. It also lets Android users securely talk with their iPhone compatriots. You can use it for secure text messages, texting groups, and voice and video calls. Here's a detailed guide to securing Signal.

Signal requires sharing your phone number with others to use it. If you'd rather use usernames instead of phone numbers, Wire and Keybase are both good options.

If you use an iPhone and want to securely talk to other iPhone users, the built-in Messages and FaceTime apps are also encrypted. WhatsApp texts and calls are encrypted too. Though keep in mind that if you use Messages or WhatsApp, your phone may be configured to save unencrypted backups of your text messages to the cloud where law enforcement could access them.

You can't use an encrypted messaging app all by yourself, so it's important to get all of your friends and fellow activists to use the same app. The more people you can get to use an encrypted messaging app instead of insecure SMS and voice calls, the better privacy everyone has. (For example, I use Signal to text with my parents, and you should too.)

None of these encrypted messaging apps send data over insecure SMS messages or voice calls, so SIM cloning and stingrays can't spy on them. Instead they send end-to-end encrypted data over the internet. This also means that the companies that run these services can't hand over your message history to the cops even if they want to; police would instead need to extract those messages directly from a phone that sent or received them.

Another important consideration is preventing cops from copying messages directly off your phone. To prevent this, make sure your phone is locked with a strong passcode and avoid biometrics (unlocking your phone with your face or fingerprint) or at least disable biometrics on your phone before you go to a protest. You also might consider bringing a cheap burner phone to a protest and leaving your main phone at home.

Another way to protect against certain forms of mobile phone spying is to lock your SIM card by setting a four- to eight-digit passcode known as a SIM PIN. Each time your phone reboots, you'll need to enter this PIN if you want SMS, voice calls, and mobile data to work.

An iPhone's SIM unlocking screen

Photo: Micah Lee

If you type the wrong PIN three times, your SIM card will get blocked, and you'll need to call your phone carrier to receive a Personal Unblocking Key (PUK) to unblock it. If you enter the wrong PUK eight times, the SIM card will permanently disable itself.

With a locked SIM, you'll still be able to use apps and Wi-Fi but not mobile data or cellphone service. So make sure that you securely record your SIM PIN somewhere safe, such as a password manager like Bitwarden, 1Password, or LastPass, and never try to guess it if you can't remember it. (You can always click Cancel to get into your phone without unlocking your SIM card. From there, open a password manager app to look up your PIN, and then reboot your phone again to enter it correctly. I've done this numerous times myself just to be sure.)

If you want to lock your SIM card, first you'll need to know the default SIM PIN for your cellphone company. For AT&T, Verizon, and Google Fi, it's 1111; for T-Mobile, Sprint, and Metro, it's 1234. If you use a different phone carrier, you should be able to search the internet to find it. (I would avoid guessing: if you type the wrong default PIN three times, your SIM card will get blocked.)

Once you know your default PIN, here's how you set a new one:
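
On an iPhone, the setting lives under Settings > Cellular > SIM PIN; on an Android phone, look under Settings > Security > SIM card lock. (Exact menu names vary by manufacturer and OS version.) Turn the SIM PIN on, enter the default PIN, and then change it to a new one.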

Now if law enforcement gets physical access to your phone, they shouldn't be able to use your locked SIM card without your PIN. If they guess your PIN incorrectly three times, the SIM card will block itself, and they'd need to convince your cellphone company to hand over the PUK for your SIM card in order to use it. If they guess the wrong PUK too many times, the SIM will permanently disable itself.


Cloud Encryption Market to Witness Astonishing Growth by 2026 | Ciphercloud, Gemalto, Hytrust and more – Crypto Daily

A research report on the Cloud Encryption Market 2020 Industry Research Report is being published by Stats and Reports. It is intended to help clients and industries understand not only the current competitive status of the market but also what the future holds for it in the upcoming period, i.e., between 2020 and 2026. It projects the future status from the previous market status of 2013 to 2018. The report is categorized in terms of region, type, key industries, and application.

Major Geographical Regions

The study report on the Global Cloud Encryption Market 2020 covers every major geographical region, as well as sub-regions, throughout the world. The report focuses on market size, value, product sales and opportunities for growth in these regions. The market study analyzes competitive trends apart from offering valuable insights to clients and industries. This data will help them plan strategies so that they can not only expand but also penetrate the market.

North America is expected to hold a dominant position in the global Cloud Encryption market, owing to increasing collaboration activities by key players over the forecast period.

A sample copy of the report can be downloaded from: https://www.statsandreports.com/request-sample/295648-global-cloud-encryption-market-size-status-and-forecast-2019-2025

The researchers have analyzed the competitive advantages of those involved in the Cloud Encryption industry. While the historical years were taken as 2013 to 2018, the base year for the study was 2018. Similarly, the report gives its projection for the year 2020 apart from the outlook for the years 2020 to 2026.

Top Leading Companies and Type

Like any other research material, the report covers key geographical regions such as Europe, Japan, the United States, India and Southeast Asia. Researchers give their opinion or insights on value, product sales, and industry share, besides available opportunities to expand in those regions. As for sub-regions, North America, Canada, Mexico, Australia, Asia-Pacific, India, South Korea, China, Singapore, Indonesia, Japan, Rest of Asia-Pacific, Germany, United Kingdom, France, Spain, Italy, Rest of Europe, Russia, Central & South America, and Middle East & Africa are included.

Major players in the report included are Ciphercloud, Gemalto, Hytrust, IBM, Netskope, Secomba, Skyhigh Networks, Sophos, Symantec, Thales E-Security, Trend Micro, Vaultive, TWD Industries AG, Parablu.

Types covered in the Cloud Encryption industry are Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS).

Applications covered in the report are Banking, Financial Services, and Insurance (BFSI), Healthcare, Telecom and IT, Government and Public Utilities, Aerospace and Defense, Retail and Others.

Geographical Scope of this report includes:

Report Aims

The objective of the researchers is to find out the sales, value, and status of the Cloud Encryption industry at the international level. While the status covers the years 2013 to 2018, the forecast is for the period 2020 to 2026, which will enable market players to not only plan but also execute strategies based on market needs.

We are currently offering a quarter-end discount to all our high-potential clients and would really like you to avail yourself of the benefits and leverage your analysis based on our report.

To Get This Report At Beneficial Rates: https://www.statsandreports.com/check-discount/295648-global-cloud-encryption-market-size-status-and-forecast-2019-2025

Cloud Encryption Market

The study focuses on key manufacturers, the competitive landscape, and SWOT analysis for the Cloud Encryption industry. Apart from looking into the geographical regions, the report concentrates on key trends and segments that are either driving or preventing the growth of the industry. Researchers have also focused on individual growth trends besides their contribution to the overall market.

Target Audience of the Global Cloud Encryption Market Study:

Key consulting companies & advisers
Large, medium-sized, and small enterprises
Venture capitalists
Value-added re-sellers (VARs)
Third-party knowledge providers
Investment bankers
Investors

Buy Full Copy Global Cloud Encryption Report 2020-2026 @ https://www.statsandreports.com/placeorder?report=295648-global-cloud-encryption-market-size-status-and-forecast-2019-2025&type=SingleUser

** The market is evaluated based on the weighted average selling price (WASP) and includes the taxes applicable to the manufacturer. All currency conversions used in the creation of this report were calculated using 2020 annual average currency conversion rates.

Crucial points encompassed in the report:

In the end, the Cloud Encryption Market Report delivers a conclusion that includes breakdown and data triangulation, consumer needs and customer preference changes, research findings, market size estimation, and data sources. These factors will help grow the business overall.

Major queries related to the Global Cloud Encryption Market and the covid-19 effect resolved in the report:

1. How are market players performing during the covid-19 event?
2. How does the pricing of essential raw materials and related markets affect the Cloud Encryption market?
3. Has the covid-19 pandemic already affected the projected regions, and what will be the maximum impact of covid-19 by region?
4. What will be the CAGR of the Cloud Encryption market during the forecast period?
5. What will be the estimated value of the Cloud Encryption market in 2026?

About Us

Stats and Reports is a global market research and consulting service provider specializing in offering a wide range of business solutions to its clients, including market research reports, primary and secondary research, demand forecasting services, focus group analysis and other services. We understand how important data is in today's competitive environment and thus have collaborated with the industry's leading research providers, who work continuously to meet the ever-growing demand for market research reports throughout the year.

Contact:

Stats and Reports
Mangalam Chamber, Office No 16, Paud Road, Sankalp Society, Kothrud, Pune, Maharashtra 411038
Phone: +1 650-646-3808
Email: [emailprotected]
Website: https://www.statsandreports.com
Follow us on: LinkedIn | Twitter


Zoom is being sued over its cloud storage practices – TechRadar

Popular video conferencing platform Zoom has been hit with a lawsuit alleging patent infringement around its cloud storage practices for recorded content.

Specifically, Zoom is accused of running afoul of patent law because it enables users to record meetings, save the video to cloud storage, and then download the content later.

The suit has been filed by Rothschild Broadcast Distribution Systems, which filed the patent (US Patent No. 8,856,221) in 2011, long after the technology for storing multimedia content in the cloud and distributing it on demand had been developed. The company has so far filed more than 25 suits against companies including Disney and World Wrestling Entertainment.

Rothschild is seeking both an award for damages and a court order halting Zoom from continuing to infringe on its patent for cloud storage and distribution.

The company is based in the Eastern District of Texas, which is a favorite jurisdiction for so-called patent trolls because of its favorable legal protections for plaintiffs.

The lawsuit, filed in the District of Colorado, puts Zoom in a difficult position. On the one hand, it is highly unlikely that Rothschild could win if the lawsuit was challenged in court. The Supreme Court has ruled that abstract ideas are not eligible to be patented if they simply move existing technology onto a computer.

In a similar lawsuit that Rothschild filed concerning the same patent, the company dismissed its claim as soon as the defendant in that case challenged the suit.

However, challenging the lawsuit is likely more costly for Zoom than simply settling with Rothschild out of court. It remains unclear how much such a settlement could cost, since the company has not disclosed previous agreements with those it has sued. Whether Zoom decides to stand its ground or make the lawsuit disappear quickly, the whole affair is likely to be expensive.

Via LawStreetMedia


Cloud At The Edge, GPU Storage And LTO Gen 9 – Forbes


This piece looks at developments and digital storage partners for public cloud company edge services, in particular AWS Outposts. We also look at a VAST Data GPU-oriented AI storage offering, as well as the introduction of LTO 9 magnetic tape technology in late 2020.

Public cloud companies have created services to bring their offerings to the edge of the network as well as into their hyperscale data centers. One example is AWS Outposts. Outposts was first announced in 2018, with general availability announced in December 2019.

Recently a number of storage companies made announcements on AWS Outposts partnerships. AWS Outposts extends AWS infrastructure, services, APIs, and tools to customer datacenters, co-location spaces, or on-premises facilities. AWS Outposts is meant to provide low latency access to on-premises applications or systems and local data processing for local storage needs in a hybrid cloud storage environment.

These AWS Outposts connections included Zadara, which announced a partnership with data management provider Storage IT to offer a storage-as-a-service solution in the AWS Marketplace. Qumulo also launched on AWS Outposts to enable file storage and data management. Qumulo on AWS Outposts allows customers to connect their file data to AWS and run AWS services.

VAST Data introduced its Universal Storage Platform, which used Intel's Optane SSDs in the front end of a storage system as a cache for data stored on quad level cell (QLC) SSDs, in 2019. The company said that by using NVMe-based Optane SSDs and QLC flash it could bring the cost of a flash memory storage system close to that of an HDD storage system. The company recently announced the availability of its next generation storage architecture, which it calls LightSpeed. LightSpeed combines VAST's NAS appliance with NVIDIA GPU-based and AI processor-based computing for AI applications.

VAST's announcement says that GPUDirect enables customers running NVIDIA GPUs to accelerate access to data and avoid extra data copies between storage and the GPU by avoiding the CPU and CPU memory altogether, as shown in the image below. In initial testing, VAST demonstrated over 90GB/s of peak read throughput via a single NVIDIA DGX-2 client, nearly 3X the performance of VAST's NFS-over-RDMA and nearly 50X the performance of standard TCP-based NFS.

GPU Direct storage access for AI applications using GPUs

The company says that LightSpeed uses a disaggregated, shared everything (DASE) architecture (using elements from its Universal Storage platform) to lower the costs of SSD storage and thus eliminate the need for storage tiering. LightSpeed doubles the performance of prior VAST storage solutions. It also provides NFS support for NVIDIA GPUDirect Storage.

The LTO program technology provider companies (HPE, IBM and Quantum), which manage the most popular digital magnetic recording tape format, officially announced the LTO 9 specification. LTO 9 tape cartridges support 18 TB of native storage capacity (less than the 24 TB native capacity for LTO 9 that was on prior LTO roadmaps). Whereas the most recent generations generally doubled storage capacity about every 2.3 years, this is a 50% increase from the 12 TB native storage capacity of LTO 8. The LTO program says that it redid the LTO roadmap to reflect the changed capacity for LTO 9 and that following generations will double with each generation, as shown below.

Updated LTO Magnetic Tape Roadmap

The LTO generation 9 specifications include previously introduced features, such as multi-layer security support via hardware-based encryption, WORM (Write-Once, Read-Many) functionality and support for the Linear Tape File System (LTFS). The new LTO generation 9 specifications include full backward read and write compatibility with LTO generation 8 cartridges. Quantum said that it will make LTO 9 tape drives available for its Scalar Tape Libraries and StorNext AEL archive systems beginning in December 2020. Other tape storage system vendors will be announcing LTO 9 support for products in late 2020.

AWS Outposts storage partners Zadara and Qumulo enable on-premises storage partnered with public cloud services. VAST Data introduces GPU AI high performance storage. The LTO program introduces LTO 9 tape technology, with vendors providing products by late 2020.


How to Access S3 Buckets from Windows or Linux – ITPro Today

S3, Amazon's cloud-based object storage service, is designed primarily for storing data that is used by applications running directly in the cloud. However, there are situations where you may want to access S3 buckets directly from your PC. You might want to do this to upload files from your PC to S3 without using the AWS Console, for example. Or, you may want to be able to monitor changes to S3 data from an application running on your local PC.

For purposes such as these, being able to access S3 data directly from your PC comes in handy. This article explains how to make S3 files available locally on both Windows and Linux using rclone, a free and open source tool for syncing cloud storage to local computers.

There are various other tools available for achieving this goal. I like rclone, however, both because it's open source and because it works with any major operating system. Although there are some minor differences in the way you use the tool to access S3 data on Windows as compared to Linux, the basic process is the same regardless of which operating system you're running.

Following are the steps for using rclone to access S3 data from Linux or Windows.

Installing rclone is quite simple. You can download it for Windows or any version of Linux. On most Linux distributions, you also have the option of installing directly from your package manager using a command such as the following (for Ubuntu):
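
sudo apt install rclone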

Or, you can download and run a Bash script to install rclone for you:
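
curl https://rclone.org/install.sh | sudo bash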

The latter approach may be desirable if you want a later version of rclone than the one offered in your distribution's package repositories; otherwise, it's better to use an official package from the repositories, because rclone will then be updated automatically for you whenever a new package version becomes available.

Rclone is a command-line tool, so you'll need to open up a command shell or terminal to run it.

Once in the shell, you can run rclone directly with a simple rclone (or rclone.exe on Windows) command if the application is in your path, which it probably is if you installed it on Linux using a package.

If instead you just downloaded rclone as a ZIP file, you will have to unpack it, then use the cd command to navigate to the directory where the rclone files are located.

Once there, a simple ./rclone config (on Linux) or .\rclone.exe config (on Windows) will start the program.

Rclone will then ask you a variety of configuration questions, including your AWS credentials for the S3 bucket you want to access. The configuration data will vary depending on how your S3 bucket is set up, but in general the default options should work.
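
When configuration completes, rclone writes your answers to its config file (on Linux, ~/.config/rclone/rclone.conf). Assuming you named the remote s3 during setup, an AWS entry looks roughly like this (the key values below are placeholders):

[s3]
type = s3
provider = AWS
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region = us-east-1

The examples below use this s3 remote name.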

After you've completed configuration, you're ready to use rclone to access S3 buckets.

Rclone offers about a dozen commands that you can use to interact with S3 data. For example, to list the contents of a bucket, use the ls command:
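
rclone ls s3:bucket-name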

(If you're on Windows, replace rclone with rclone.exe.)

In this example, bucket-name is the name of your S3 bucket.

Likewise, to copy a file, use the copy command:
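
rclone copy /path/to/local/file s3:bucket-name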

A full list of rclone commands is available on the rclone website. Keep in mind, however, that not all of them will work with S3 data. For example, you can't use the mkdir command (which would create a new directory) with an S3 bucket because S3 doesn't support directories.

Rclone's built-in commands for interacting with data are handy if you just need to copy or access some files manually. But what if you want to automate interaction with your S3 data, or access it using commands that are not supported by rclone?

In that case, you can use the rclone mount command to mount your S3 bucket as a directory. That way, you can interact with your S3 data just as you would any other data stored locally on your computer.

To mount an S3 bucket with rclone on Linux, use a command like:
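
rclone mount s3:bucket-name /mnt/some-dir --daemon

(The --daemon flag runs the mount in the background; without it, rclone stays in the foreground until you unmount.)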

Note that you may need to run this command as root. You'll also need to make sure the mount point (/mnt/some-dir in the example above) exists before you run the command. (If it doesn't, use mkdir on Linux to create it.)

The process is similar on Windows, with one major difference: You first need to install WinFsp (find the installer here) before you can mount an S3 bucket. Once WinFsp has been installed, you can mount your S3 bucket as a directory with:
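
rclone.exe mount s3:bucket-name C:\some-dir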

In this case, your mount point (C:\some-dir in the example) should be a directory that does not yet exist.

Whether you use Windows or Linux, rclone offers a free and straightforward way to access S3 buckets from your local computer. However, there are some caveats to keep in mind.

One is that Amazon charges you a fee every time you create or modify a file in an S3 bucket. This means that, if you perform a large number of file operations via rclone on S3 data, you may end up with a substantial cloud bill.

A second consideration to weigh is that the performance of your S3 data when you access it from your PC may be limited due to network latency. Even if you mount your bucket as a local directory, expect a delay when you interact with the data.

For both of these reasons, you may end up shooting yourself in the foot if you try to use S3 buckets as a cheap way to back up all of the data from your PC, or as a personal file-sharing service. In other words, don't try to use the method described above to turn S3 into something like Dropbox or Google Drive, which are better suited to situations where you need fast, cost-efficient integration between your local file system and cloud storage. Even though accessing S3 data from a local computer is relatively easy, the performance and cost implications make it impractical to do this on a large-scale or recurring basis.

Still, if you need a fast and simple way to access S3 data from your computer in order to copy files or use a certain application on a one-off basis, rclone makes it easy to do so on Windows, Linux and virtually every other operating system you can find.


Microsoft's storage dream: a hard disk drive the size of a wardrobe with Samsung Galaxy S20 parts – TechRadar

At the company's annual Ignite event for developers, Microsoft shed more light on the work it's doing with holographic storage.

The firm's research arm has gone back to the drawing board to rethink storage at a hyperscale level, starting by exploding the first dogma: that storage had to come in a 2.5-inch or 3.5-inch form factor.

After all, there's no hard and fast rule saying that data center storage has to be based on consumer hard disk drives - or even enterprise SSDs. New formats like the ruler SSD form factor offer some innovation, but don't really break the mould.

The smallest unit of deployment in cloud storage, say the researchers, is actually the storage rack, which is about the size of a cupboard and allows the designers to think of new hardware at rack scale.

According to a Microsoft blog post, this allows components to be efficiently shared across the entire rack and could end up shifting the paradigm for web hosting, IaaS and PaaS.

While Project Silica - another of Microsoft's moonshot storage projects - looked at storing data for a long time using a write-once, read-many archival format, project HSD (for Hologram Storage Device) looks at how so-called hot data can be accessed faster and stored in even smaller volumes.

In the blog post, Microsoft shared an illustration highlighting the formidable rise in resolution of commodity camera sensors, which has grown from 1 megapixel to more than 100 megapixels in less than two decades.

Project HSD rides on the coat tails of this improvement, exploiting the resolution growth to simplify the (optical) hardware and moving the complexity to the software.

The 108-megapixel ISOCELL Bright HMX camera sensor was introduced more than one year ago by Samsung, in partnership with Xiaomi. It not only has a large image sensor but was also the first to break the 100-megapixel barrier, as it's used in phones including the Samsung Galaxy S20 Ultra and the Xiaomi Mi CC9 Pro Premium.

But Samsung wants to reach even greater heights and executive Yongin Park has already confirmed that a 600-megapixel sensor is the goal.

Someone at Microsoft Research will certainly take note, given that pairing consumer optics and Azure-based AI can significantly increase not only the storage density of HSD but also read/write speeds and access times.


There is a hole in my cloud bucket – Fudzilla

Dear Liza, dear Liza

A Comparitech security report claims that nearly six percent of all Google Cloud buckets are vulnerable to unauthorised access due to misconfiguration issues.

Buckets, in cloud storage, are the basic containers that are used to hold the data. Everything that a user stores in cloud storage must be contained in a bucket. Admins can use these containers to organise their data and to control access to it. However, unlike folders and directories, they cannot nest one bucket into another bucket.

Writing in his blog, Comparitech's Paul Bischoff revealed that its team attempted to search for open buckets on the web. It started by scanning the web using a tool which is easily available to admins and hackers.

In its web search, the researchers looked for Alexa's top 100 web domains, in combination with some common words, such as "db", "database", and "bak" used by admins when naming their buckets.

Through this web scan, the research team was able to discover 2,064 Google Cloud buckets in about 2.5 hours.

After analysing all 2,064 buckets, the researchers found that 131 of them - nearly six percent - were misconfigured and vulnerable to unauthorised access.

According to Comparitech, the exposed data included nearly 6,000 scanned documents containing confidential information, such as passport details and birth certificates of children in India. A database belonging to a Russian web developer was also found, which leaked the developer's chat logs and email server credentials.

Bischoff warns that uncovering exposed cloud databases on the internet is not difficult. Google cloud storage has naming guidelines that make open buckets easy to find. Such buckets can contain sensitive files, source code, credentials and databases, which can be illegally accessed by malicious actors.

According to Bischoff, admins can check if their bucket is exposed by using gsutil (Google's official command-line tool) or the BucketMiner tool to scan the web. Scanning for a company's name on Google and Amazon infrastructure will display some filenames, images, or other stats, suggesting the bucket is open.
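
A quick way to run that check with gsutil, using a hypothetical bucket name: list the bucket from an account that has no rights to the project. If the command returns file listings rather than an access error, the bucket is open:

gsutil ls gs://example-company-db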


Red Hat shifts automated data pipeline into OpenShift – Blocks and Files

Red Hat today released OpenShift Container Storage 4.5 to deliver Kubernetes services for cloud-native applications via an automated data pipeline.

Mike Piech, Red Hat cloud storage and data services GM, said in his launch statement: As organizations continue to modernise cloud-native applications, an essential part of their transformation is understanding how to unlock the power of data these applications are generating.

With Red Hat OpenShift Container Storage 4.5 we've taken a significant step forward in our mission to make enterprise data accessible, resilient and actionable to applications across the open hybrid cloud.

OpenShift is Red Hat's container orchestrator, built atop Kubernetes. Ceph open source storage provides a data plane for the OpenShift environment.

The automated data pipeline is based on notification-driven architectures and integrated access to Red Hat AMQ Streams and OpenShift Serverless. AMQ Streams is a massively scalable, distributed, and high-performance data streaming platform based on the Apache Kafka project.

OpenShift Serverless enables users to build and run applications so that, when an event trigger occurs, the application automatically scales up based on incoming demand, or scales to zero after use.

Red Hat says that, with the recent release of OpenShift Virtualization, users can host virtual machines and containers on a single, integrated platform which includes OpenShift Container Storage. This is what VMware is doing with its Tanzu project.

New features in OpenShift Container Storage 4.5 include:


Seagate gets into object storage with new CORTX software – Blocks and Files

Seagate is entering the object storage business with brand new CORTX software.

The disk drive maker aims to build a developer community for the open source software and has published a reference architecture for use in a Lyve Drive Rack.

Announcing the news today at Seagate Datasphere, the company said CORTX gives developers and partners access to mass capacity-optimised data storage architectures. CORTX use cases include artificial intelligence, machine learning, hybrid cloud, the edge and high-performance computing.

The object storage market has seen two entrants in two weeks: Dell EMC has joined in with ObjectScale software.

So why does the world need another object storage software technology? Seagate's Ken Claffey, GM for Enterprise Data Solutions, said: CORTX brings something different to other object stores in that it will uniquely leverage HDD innovations such as REMAN to reduce the likelihood of rebuild storms, HAMR to enable the largest capacity/lowest cost per bit next gen devices, and multi-actuator to retain IOPS per capacity ratios. CORTX and the community are focused on such capabilities that are required in mass capacity deployments.

HAMR is Seagate's Heat-Assisted Magnetic Recording drive, due to ship at 20TB capacity by year-end, and a pathway towards 40TB HDD capacities. Multi-actuator drives have two sets of read-write heads and logically divide a disk drive into two halves that perform read/write operations concurrently to increase overall IO bandwidth.

Lyve Drive is a series of integrated, modular data storage drives, carriers and receivers for multi-stage workflow processes.

Jacques-Charles Lafoucriere, program manager at The French Alternative Energies and Atomic Agency and an early CORTX adopter, said: CORTX can very nicely work with storage tools and many different types of storage interfaces. We have effectively used CORTX to implement a parallel file system interface (pNFS) and hierarchical storage management tools. CORTX architecture is also compatible with artificial intelligence and deep learning (AI/DL) tools such as TensorFlow.

Gary Grider, HPC division leader at Los Alamos National Lab, also said: I am very excited to see what Seagate is doing with CORTX and am optimistic about its ability to lower costs for data storage at the exabyte scale. We will be closely following the open source CORTX and will participate in the community built around it, because we share Seagate's goal of economically efficient storage optimised for massive scalability and durability.

Toyota and Fujitsu are also early CORTX adopters.

Shipments of Lyve Drive Rack and the 20TB HAMR drives are scheduled to begin in December.


Ceph scales to 10 billion objects – Blocks and Files

Ceph, the open source integrated file, block and object storage software, can support one billion objects. But can it scale to 10 billion objects and deliver good and predictable performance?

Yes, according to Russ Fellows and Mohammad Rabin of the Evaluator Group who set up a Ceph cluster lab and, by using a huge metadata cache, scaled from zero to 10 billion 64KB objects.

In their soon-to-be published white paper commissioned by Red Hat, Massively Scalable Cloud Storage for Cloud Native Applications, they report that setting up Ceph was complex, without actually using that word: We found that, because of the many Ceph configuration and deployment options, it is important to consult with an experienced Ceph architect prior to deployment.

The authors suggest smaller organisations with smaller needs can use Ceph reference architectures. Larger organisations with larger needs are better off working with Red Hat or other companies with extensive experience in architecting and administering Ceph.

Analysis of unstructured data, files and objects is required to discern patterns and gain actionable insights into a business's operations and sales.

These patterns can be discovered through analytics and by developing and applying machine learning models. Very simply, the more data points in an analysis run, the better the resulting analysis or machine learning model.

It is a truism that object data scales more easily than file storage because it has a single flat address space whereas files exist in a file-folder structure. As the number of files and folders grows, the file access metadata also grows in size and complexity, more so than object access metadata.

File storage is generally used for applications that need faster data access than object storage. Red Hat wants to demonstrate both the scalability of object storage in Ceph and its speed. The company has shown Ceph can scale to a billion objects and perform well at that level via metadata caching on NVMe SSDs.

However, Red Hat wants to go further and has commissioned the Evaluator Group to scale Ceph tenfold, to 10 billion objects, and see how it performed.

The Evaluator test set-up had six workload-generating clients driving six object servers. Each pair of these accessed, in a split/shared-nothing configuration, a Seagate JBOD containing 106 x 16TB Exos nearline disk drives; 5PB of raw capacity in total spread across three storage JBODs.

Each object server had dual Xeon 18-core CL-6154 processors, 384GB of DRAM, six Intel DC P4610 NVMe 7.6TB write-optimised NAND SSDs for metadata caching, and Intel memory DIMMs.

Ceph best practice recommends not exceeding 80 per cent capacity and so the system was sized to provide 4.5PB of usable Ceph capacity. Each 64KB object required about 10KB of metadata, meaning around 95TB of metadata for the total of 10 billion objects.

The Evaluator Group testers ran multiple test cycles, each performing PUTS to add to the object count, then GETS and, thirdly, a mixed workload test. The performance of each successive workload was measured to show the trends as object counts and capacity both increased.

The measurements of GET (read) and PUT (write) performance showed a fairly linear pattern as the object count increased. PUT operations showed linear performance up to 8.8 billion objects, 80 per cent of the system's usable Ceph capacity, and then dropped off slightly. GET operations showed a dip to a lower level around 5 billion objects and a more pronounced decline after the 8.8 billion object level.

GET performance declined once the metadata cache capacity was exceeded (yellow line on chart) and the cluster's usable capacity surpassed 80 per cent of actual capacity. Once the cache's capacity was surpassed, the excess metadata had to be stored on disk drives, and accesses were consequently much slower.

Performance linearity at this level would require a larger metadata cache.

The deep scrubbing dip on the chart occurred because a Ceph parameter set for deep scrubbing, to help with data consistency, came into operation at 500 million objects. Ceph was reconfigured to stop this.

The system exhibited nearly 2GB/sec of sustained read throughput and more than 1GB/sec of sustained write throughput.

The Evaluator Group also tested how Ceph performed with up to 20 million 128MB objects. In this test the metadata cache capacity was not exceeded and performance was linear for reads and near-linear for writes as the object count increased.

There is less metadata with the smaller number of objects, meaning no spillover of metadata to disk. The GET and PUT performance lines are both linear-ish (deterministic is the Evaluator Group's term), with performance of 10GB/sec for both operation types.

Suppliers like Iguazio talk about operating at the trillion-plus file level. That's extreme, but today's extremity is tomorrow's normality in this time of massive data growth. That suggests Red Hat will have to keep going further to establish and then re-establish Ceph's scalability credentials.

Next year we might see a 100 billion object test and, who knows, a trillion object test could follow some day.
