
Why won’t Google increase its free 15GB cloud storage? – Pocket-lint

It seems like everyone and their dog has a Google account nowadays. Gmail is the most popular email service around, with over a billion daily users, but a Google account's usefulness doesn't end there. It's used as a hub for all the Google services, allows easy syncing of Google Chrome between devices, and enables hundreds of other quality-of-life features.

One of the handiest perks of having a Google account is the 15GB of free cloud storage available on Google Drive. Sure, that storage is shared between your Gmail, Google Drive, and Google Photos, but it's still useful for keeping your backups, email attachments, and a few documents around and ready to share online.

The 15GB limit shared across all the Google services was introduced back in 2013, and the bar has not been raised since. On the contrary, over the years, the company opted to remove some of the advantages that its cloud storage offered, such as unlimited photo backup for Google Pixel users, essentially making it a worse deal than it was all those years ago.

That raises the question: Why doesn't the free storage tier change? Over the years, the price of storage has fallen significantly, so Google should -- at least theoretically -- be able to offer much more storage to Gmail users. Unfortunately, it's not as simple as that, and there are a few quite good reasons that the company is sticking to its 15GB limit.

Let's talk about the expenses first. It's true that storage prices have fallen significantly in the last few years, with both hard drives and SSDs coming down in price per gigabyte. However, that doesn't take into account the growth of Google itself or the rising prices of electricity and server space, all of which contribute to the steadily increasing cost of maintaining the cloud storage the company offers.

In the 2021 blog post announcing the end of unlimited photo storage, Google mentioned that users add more than 4.3 million GB to its servers every day. That number grows significantly every year even without a bigger free storage tier, so the operating costs for Google are tremendous. The biggest and most obvious reason the company doesn't expand its free tier is, simply, cost.

Plus, 15GB is still one of the bigger free allowances around, so Google doesn't see the need to compete in this space anymore, and doing the bare minimum is usually the preferred way for giant companies to keep their costs down.

Speaking of doing the bare minimum: Most users really do not need more than 15GB of free storage.

For tech enthusiasts, 15GB of storage might feel like a pittance, but for a casual user who's only backing up some photos from their Android phone and getting a few emails a day, 15GB is more than enough. That's especially true if you only use your Google account for Gmail. Seeing as the maximum attachment size is 25MB, you could store roughly 600 emails with the biggest attachment possible before running out of space.
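As a quick sanity check of that figure, here is a minimal sketch of the arithmetic; the decimal 1GB = 1,000MB convention and the flat 25MB attachment cap are assumptions made purely for illustration:

    # Rough arithmetic behind the ~600-email figure quoted above.
    free_quota_gb = 15        # free storage shared across Gmail, Drive, and Photos
    max_attachment_mb = 25    # Gmail's maximum attachment size

    print((free_quota_gb * 1000) / max_attachment_mb)   # 600.0 with decimal units
    print((free_quota_gb * 1024) / max_attachment_mb)   # 614.4 with binary (GiB/MiB) units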

That's quite an unrealistic scenario, though, so let's look at something more day-to-day.

I got my personal Gmail account around 2010, and ever since, I have probably deleted no more than 50 emails. I use this account for almost everything, with dozens of emails every day that end up simply rotting in the inbox -- a terrible habit, I know, but who has the time to take care of their inbox? What's the result? Over all these years, with more than 10,000 unread emails and probably even more read ones, my Gmail has grown to 1.74GB. I could stay this disorganized for the rest of my life, and my Gmail account still wouldn't touch the free 15GB limit.

Of course, that's different if you want to use Google Photos as your backup or Google Drive to share and store some files, but for the most basic uses, 15GB of free cloud storage really is enough for most people.

Ultimately, though, the reason Google doesn't want to give you more free cloud storage is really simple: It wants to make money selling you this service. Especially now that cloud storage is getting even more popular and widespread, it's difficult to imagine Google taking a step back and offering more free storage, considering its push toward Google One.

Of course, it's not all bad in the paid cloud storage world. I know because I've been using Google One for a while now. The cheapest tier is quite affordable at $1.99 per month and gets you not only 100GB of cloud storage across Google services, but some additional goodies as well. We're talking about the ability to share your storage space with up to five people, as well as more editing tools in Google Photos.

However, the real fun starts when you choose the highest-priced Google One plan, called AI Premium. Not only does it include 2TB of cloud storage, but more importantly, it also lets you use Google Gemini Advanced. It's an improved Gemini AI model that works as a standalone chatbot and is also available in Google Docs, Gmail, and other Google services if you buy the highest tier of the Google One subscription.

So, ultimately, you shouldn't expect Google to offer more free cloud storage any time soon, as it would significantly harm the company's business and discourage users from buying the services that Google wants to push.

You really shouldn't worry that much about the lack of free cloud storage, though. Ultimately, relying on Google's (or anyone else's, for that matter) cloud solution is not the best practice if you value the safety of your data. Instead, if you feel like 15GB is not enough for you, you should look into getting your own Network-Attached Storage, or maybe even setting up your own cloud storage solution. That would let you create a cloud storage service that's much more spacious than the ones offered by Google or other companies, and also, ultimately, much more affordable in the long run.

Read more from the original source:
Why won't Google increase its free 15GB cloud storage? - Pocket-lint

Read More..

Google Cloud NEXT 2024: The hottest news, in brief – The Stack

Google Cloud's first Arm-based CPU for the data centre, a host of new compute and storage services that dramatically improve generative AI performance, a security-centric Chrome offering, and a flurry of enterprise-focused Workspace updates that take the fight to Microsoft 365.

Also, AI in everything, including Gemini and Vertex AI in the BigQuery data warehouse (with fine-tuning) in public preview, for "seamless preparation and analysis of multimodal data such as documents, audio and video files." (NB: Vector search came to BigQuery in preview in February.)

Those were among the updates set to get serious airtime at Google Cloud NEXT in Las Vegas this week. The Stack will share more considered analysis of the news in the coming days, along with interviews with executives and customers, but here's an early sample from a blockbuster set of press releases, GitHub repositories and blogs...

"Unlike traditional email and productivity solutions, Gmail and Workspace were built from the very beginning on a cloud-native architecture, rooted in zero-trust principles, and augmented with AI-powered threat defenses."

So said Google, pointedly, in the wake of the CSRB's blistering indictment of Microsoft's security, which noted that Redmond had designed its consumer MSA identity infrastructure more than 20 years ago.

Workspace, Google's suite of collaboration and productivity applications, has approximately 10 million paying users. That makes it a minnow compared to the 300 million+ paid seats Office 365 boasted back in 2022.

It could be more of a threat to Microsoft.

A series of new features unveiled today may make it one. They include a new $10/user AI Security add-on that will let Workspace admins "automatically classify and protect sensitive files and data using privacy-preserving AI models and Data Loss Prevention [DLP] controls trained for their organization," a Google spokesperson told The Stack, adding that "we're extending DLP controls and classification labels to Gmail in beta."

Pressed for detail, they told us that these will include:

Also coming soon: Experimental support for post-quantum cryptography (PQC) in client-side encryption [with partners] Thales and Fortanix

A new generative AI service called Google Vids, baked into Google Workspace, may get more headlines. That's a video, writing, production, and editing assistant that will work in-browser and sit alongside Docs, Sheets, and Slides from June. It is less a serious competitor for Premiere Pro and more a templating assistant that pieces together your first draft with suggested scenes from stock videos, images, and background music. (The Stack has clarified that users can also upload their own video, not just use stock...)

Other Workspace updates today:

Chat: Increased member capacity of up to 500,000 in Spaces for bigger enterprise customers. Also new: GA messaging interoperability with Slack and Teams through Google-funded Mio, and various AI integrations and enhancements across Docs, Sheets, etc.

NVIDIA CEO Jensen Huang anticipates over $1 trillion in data center spending over the next four years as infrastructure is heavily upgraded for more generative AI-centric workloads. This isn't just a case of plumbing in more GPUs; Google Cloud is showcasing some real innovations here.

It boasted of "significant enhancements at every layer of our AI Hypercomputer architecture [including] performance-optimized hardware, open software and frameworks."

Top of the list and hot off the press:

Various other promises of faster, cheaper compute also abound. But it's storage and caching where GCP's R&D work really shines. (Important for generative AI because storage is a HUGE bottleneck for most models.)

A standout is the preview release of Hyperdisk, a block storage service optimised for AI inference/serving workloads that Google Cloud says accelerates model load times up to 12X compared to common alternatives, with read-only, multi-attach, and thin provisioning.

Hyperdisk lets users spin up 2,500 instances to access the same volume and delivers up to 1.2 TiB/s of aggregate throughput per volume: over 100X greater performance than Microsoft Azure Ultra SSD and Amazon EBS io2 BlockExpress. In short, its volumes are heavily optimised, managed network storage devices located independently from VMs, so users can detach or move Hyperdisk volumes to keep data, even after deleting VMs.

"Hyperdisk performance is decoupled from size, so you can dynamically update the performance, resize your existing Hyperdisk volumes, or add more Hyperdisk volumes to a VM to meet your performance and storage space requirements," Google boasts, although there are some limitations...

Other storage/caching updates:

Chrome Enterprise Premium is a turbocharged version of Chrome Enterprise with new...

Yes, we agree, this sounds rather good too.

More details and pricing in a standalone piece soon.

Follow this link:
Google Cloud NEXT 2024: The hottest news, in brief - The Stack

Read More..

Google Photos on Android seems primed to pick up a ‘recover storage’ option – Android Central

A new option hidden within the code for the Google Photos app teases a familiar space-saving function.

According to PiunikaWeb, courtesy of AssembleDebug, the latest version of Photos (6.78) contains information regarding a coming "Recover Storage" option. The feature was discovered within the "Account Storage" section, under "Manage Storage." Upon tapping, the Android app showed an addition to the page that would let users "convert photos to Storage saver."

Google's description says the saver will "recover some storage" by reducing the quality of your previously cloud-saved items to save space. This applies to all of the photos and videos a user has saved via the cloud.

A subsequent page states Photos will not touch the original quality of items stored in Gmail, Drive, or YouTube. Additionally, other items on a user's Pixel device may not be roped into this either.

The publication states Google's continued development of Recover Storage has brought in more information about photo/video compression. The company will seemingly warn users in-app that compressing their older items to a reduced quality "can't be reversed."

Users should also be prepared to wait a while as the app does its thing, which could take a few days.

If this feature sounds familiar, it's because the web-based version of Photos already offers this space-saving option. The good thing is that compressing your older media won't affect your future uploads, as stated on its support page. So, if you're running out of space (again), you can always try to compress your files again.

There's speculation that Google could roll out its Recover Storage option to Android users soon, as its functionality seems nearly done. Moreover, it seems it will arrive for iOS devices in conjunction with Android.

Yesterday (Apr. 10), the company announced that a few powerful AI editing tools will soon arrive in Photos for free. Beginning May 15, all users can utilize Magic Eraser, Photo Unblur, Portrait Light, and a few more without a subscription. Eligible devices include those running Android 8 and above, Chromebook Plus devices, and iOS 15 and above.

Go here to see the original:
Google Photos on Android seems primed to pick up a 'recover storage' option - Android Central

Read More..

HYCU Wins Google Cloud Technology Partner of the Year Award for Backup and Disaster Recovery – GlobeNewswire

Boston, Massachusetts, April 09, 2024 (GLOBE NEWSWIRE) -- HYCU, Inc., a leader in data protection as a service and one of the fastest-growing companies in the industry, today announced that it has received the 2024 Google Cloud Technology Partner of the Year award for Backup and DR. HYCU is being recognized for its achievements in the Google Cloud ecosystem, helping joint customers do more with less by leveraging HYCU's R-Cloud platform, which runs natively on Google Cloud to provide core data protection services, including enterprise-class automated backup and granular recovery across Google Cloud and other IaaS, DBaaS, PaaS, and SaaS services.

"Google Cloud's Partner Awards celebrate the transformative impact and value that partners have delivered for customers," said Kevin Ichhpurani, Corporate Vice President, Global Ecosystem and Channels at Google Cloud. "We're proud to announce HYCU as a 2024 Google Partner Award winner and recognize their achievements enabling customer success from the past year."

HYCU currently provides backup and recovery for the broadest range of IaaS, DBaaS, PaaS, and SaaS services for Google Cloud. This support includes Google Workspace, BigQuery, Cloud SQL, AlloyDB, Cloud Functions, Cloud Run, and App Engine, with enhanced capabilities for GKE. It comes in addition to support for Google Cloud services including Google Compute Engine, Google Cloud Storage, Google Cloud VMware Engine, and SAP on Google Cloud. With the HYCU R-Cloud platform, HYCU can now help customers protect more Google Cloud services than any other provider in the industry. HYCU recently announced it has passed the milestone of 70 SaaS integrations.

"In a year when the threat landscape evolved to put companies at an even higher risk of data loss due to cyber threats, HYCU built an industry-leading solution on Google Cloud to help customers extend purpose-built data protection to more of the Google Cloud services and SaaS applications that their businesses rely on," said Simon Taylor, Founder and CEO, HYCU, Inc. "HYCU's innovation has also helped drive more growth for Google through double-digit Google Marketplace GTV growth year over year. And more HYCU customers recognized the value of HYCU R-Cloud, leveraging its full power for data protection across Google Cloud, on-prem, and SaaS, with all data backups stored securely using Google Cloud Storage. All of us at HYCU are both excited and proud to be named a Partner of the Year. It is yet another milestone as we look to solve the world's modern data protection challenges."

Since the HYCU R-Cloud platform was released and running on Google Cloud, customers have been able to benefit from R-Graph, the first tool designed to visualize a company's entire data estate, including on-premises, Google Cloud, and SaaS data. As the industry's first cloud-native platform for data protection, HYCU R-Cloud enables enterprise-grade data protection for new data sources to be built and released quickly and efficiently. This has enabled HYCU to extend data protection to dozens of new Google Cloud services and SaaS applications in the past twelve months, and to leverage Google Cloud Storage to securely store backups.

For more information on HYCU R-Cloud, visit: https://www.hycu.com/r-cloud, follow us on X (formerly Twitter), connect with us on LinkedIn, Facebook, Instagram, and YouTube.

HYCU is showcasing its solution during Google Cloud Next from April 9th through the 11th in Las Vegas at booth #552. Attendees can learn more about HYCU's modern data protection approach firsthand.

# # #

About HYCU

HYCU is the fastest-growing leader in the multi-cloud and SaaS data protection as a service industry. By bringing true SaaS-based data backup and recovery to on-premises, cloud-native, and SaaS environments, the company provides unparalleled data protection, migration, disaster recovery, and ransomware protection to thousands of companies worldwide. As an award-winning and recognized visionary in the industry, HYCU solutions eliminate complexity, risk, and the high cost of legacy-based solutions, providing data protection simplicity to make the world safer. With an industry-leading NPS score of 91, customers experience frictionless, cost-effective data protection, anywhere, everywhere. HYCU has raised $140M in VC funding to date and is based in Boston, Mass. Learn more at http://www.hycu.com.

The rest is here:
HYCU Wins Google Cloud Technology Partner of the Year Award for Backup and Disaster Recovery - GlobeNewswire

Read More..

Reporting on the FIDE Candidates – Chess.com

Hey everyone, I am here in Toronto getting ready to report on the FIDE Candidates Tournament. Can't wait to see all of the action live and in person! I wrote a Substack post about my arrival here in beautiful Toronto! Head over there to check it out.

I will be here for 5 days to show our documentary, King Chess, and write an article for American Chess Magazine. I've previously reported on the 2016 World Championship, the 2018 Candidates, and the 2018 World Championship. I am glad to be back in these post-COVID times, to be out and about brushing shoulders with chess greatness and all of the chess fans!

See original here:
Reporting on the FIDE Candidates - Chess.com

Read More..

Early OpenAI investor bets on alternative to Sam Altman's approach to AI – Semafor

Each major breakthrough in AI has occurred by removing human involvement from part of the process. Before deep learning, machine learning involved humans labeling data meticulously so that algorithms could then understand the task, deciphering patterns and making predictions. But now, deep learning obviates the need for labeling. The software can, in essence, teach itself the task.

But humans have still been needed to build the architecture that told a computer how to learn. Large language models like ChatGPT came from a breakthrough in architecture known as the transformer. It was a major advance that allowed a deep learning method called neural networks to keep improving as they grew to unfathomably large sizes. Before the transformer, neural networks plateaued after reaching a certain size.

That is why Microsoft and others are spending tens of billions on AI infrastructure: It is a bet that bigger will continue to mean better.

The big downside of this kind of neural network, though, is that the transformer is imperfect. It tells the model to predict the next word in a sentence based on how groups of letters relate to one another. But there is nothing inherent in the model about the deeper meaning of those words.
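To make that point concrete, here is a deliberately toy sketch (in Python) of what "predict the next word" amounts to: the model only scores possible continuations of a token sequence and samples one. The vocabulary and probabilities below are invented for illustration and do not reflect any real model:

    import random

    # Toy stand-in for a language model: a lookup table of next-token
    # probabilities for a single context. A real transformer computes such a
    # distribution for any context, but the principle is the same -- it scores
    # continuations; it does not check facts.
    next_token_probs = {
        ("the", "capital", "of", "france", "is"): {
            "paris": 0.92, "lyon": 0.05, "london": 0.03,
        },
    }

    def sample_next(context):
        probs = next_token_probs[context]
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights)[0]

    print(sample_next(("the", "capital", "of", "france", "is")))
    # Usually "paris", occasionally "london": the model has no concept of which
    # continuation is true, only of which is statistically likely.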

It is this limitation that leads to what we call hallucinations; transformer-based models don't understand the concept of truth.

Morgan and many other AI researchers believe that if there is an AI architecture that can learn concepts like truth and reasoning, it will be developed by the AI itself, and not by humans. "Now, humans no longer have to describe the architecture," he said. "They just describe the constraints of what they want."

The trick, though, is getting the AI to take on a task that seems to exist beyond the comprehension of the human brain. The answer, he believes, has something to do with a mathematical concept known as category theory.

Increasingly popular in computer science and artificial intelligence, category theory can turn real-world concepts into mathematical formulas, which can be converted into a form of computer code. Symbolica employees, along with researchers from Google DeepMind, published a paper on the subject last month.

The idea is that category theory could be a method to instill constraints in a common language that is precise and understandable to humans and computers. Using category theory, Symbolica hopes its method will lead to AI with guardrails and rules baked in from the beginning. In contrast, foundation models based on transformer architecture require those factors to be added on later.

Morgan said it will be the key to creating AI models that are reliable and don't hallucinate. But like OpenAI, it's aiming big, in hopes that its new approach to machine learning will lead to the holy grail: software that knows how to reason.

Symbolica, though, is not a direct competitor to foundation model companies like OpenAI and views its core product as bespoke AI architectures that can be used to build AI models for customers.

That is an entirely new concept in the field. For instance, Google did not view the transformer architecture as a product. In fact, it published the research so that anyone could use it.

Symbolica plans to build customized architectures for customers, which will then use them to train their own AI models. "If they give us their constraints, we can just build them an architecture that meets those constraints and we know it's going to work," Morgan said.

Morgan said the method will lead to interpretability, a buzzword in the AI industry these days that means the ability to understand why models act the way they do. The lack of interpretability is a major shortcoming of large language models, which are so vast that it is extremely challenging to understand how, exactly, they came up with their responses.

The limitation of Symbolica's models, though, is that they will be more narrowly focused on specific tasks compared to generalist models like GPT-4. But Morgan said that's a good thing.

"It doesn't make any sense to train one model that tries to be good at everything when you could train many tinier models for less money that are way better than GPT-4 could ever be at a specific task," he said.

(Correction: An earlier version of this article incorrectly said that some Symbolica employees had worked at Google DeepMind.)

See more here:
Early OpenAI investor bets on alternative to Sam Altman's approach to AI - Semafor

Read More..

Sea-surface pCO2 maps for the Bay of Bengal based on advanced machine learning algorithms | Scientific Data – Nature.com

Read this article:
Sea-surface pCO2 maps for the Bay of Bengal based on advanced machine learning algorithms | Scientific Data - Nature.com

Read More..

Reducing Toxic AI Responses – Neuroscience News

Summary: Researchers developed a new machine learning technique to improve red-teaming, a process used to test AI models for safety by identifying prompts that trigger toxic responses. By employing a curiosity-driven exploration method, their approach encourages a red-team model to generate diverse and novel prompts that reveal potential weaknesses in AI systems.

This method has proven more effective than traditional techniques, producing a broader range of toxic responses and enhancing the robustness of AI safety measures. The research, set to be presented at the International Conference on Learning Representations, marks a significant step toward ensuring that AI behaviors align with desired outcomes in real-world applications.

Source: MIT

A user could ask ChatGPT to write a computer program or summarize an article, and the AI chatbot would likely be able to generate useful code or write a cogent synopsis. However, someone could also ask for instructions to build a bomb, and the chatbot might be able to provide those, too.

To prevent this and other safety issues, companies that build large language models typically safeguard them using a process called red-teaming. Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.

But this only works effectively if engineers know which toxic prompts to use. If human testers miss some prompts, which is likely given the number of possibilities, a chatbot regarded as safe might still be capable of generating unsafe answers.

Researchers from Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab used machine learning to improve red-teaming. They developed a technique to train a red-team large language model to automatically generate diverse prompts that trigger a wider range of undesirable responses from the chatbot being tested.

They do this by teaching the red-team model to be curious when it writes prompts, and to focus on novel prompts that evoke toxic responses from the target model.

The technique outperformed human testers and other machine-learning approaches by generating more distinct prompts that elicited increasingly toxic responses. Not only does their method significantly improve the coverage of inputs being tested compared to other automated methods, but it can also draw out toxic responses from a chatbot that had safeguards built into it by human experts.

"Right now, every large language model has to undergo a very lengthy period of red-teaming to ensure its safety. That is not going to be sustainable if we want to update these models in rapidly changing environments.

"Our method provides a faster and more effective way to do this quality assurance," says Zhang-Wei Hong, an electrical engineering and computer science (EECS) graduate student in the Improbable AI Lab and lead author of a paper on this red-teaming approach.

Hong's co-authors include EECS graduate students Idan Shenfield, Tsun-Hsuan Wang, and Yung-Sung Chuang; Aldo Pareja and Akash Srivastava, research scientists at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Pulkit Agrawal, director of the Improbable AI Lab and an assistant professor in CSAIL. The research will be presented at the International Conference on Learning Representations.

Automated red-teaming

Large language models, like those that power AI chatbots, are often trained by showing them enormous amounts of text from billions of public websites. So, not only can they learn to generate toxic words or describe illegal activities, the models could also leak personal information they may have picked up.

The tedious and costly nature of human red-teaming, which is often ineffective at generating a wide enough variety of prompts to fully safeguard a model, has encouraged researchers to automate the process using machine learning.

Such techniques often train a red-team model using reinforcement learning. This trial-and-error process rewards the red-team model for generating prompts that trigger toxic responses from the chatbot being tested.

But due to the way reinforcement learning works, the red-team model will often keep generating a few similar prompts that are highly toxic to maximize its reward.

For their reinforcement learning approach, the MIT researchers utilized a technique called curiosity-driven exploration. The red-team model is incentivized to be curious about the consequences of each prompt it generates, so it will try prompts with different words, sentence patterns, or meanings.

"If the red-team model has already seen a specific prompt, then reproducing it will not generate any curiosity in the red-team model, so it will be pushed to create new prompts," Hong says.

During its training process, the red-team model generates a prompt and interacts with the chatbot. The chatbot responds, and a safety classifier rates the toxicity of its response, rewarding the red-team model based on that rating.

Rewarding curiosity

The red-team model's objective is to maximize its reward by eliciting an even more toxic response with a novel prompt. The researchers enable curiosity in the red-team model by modifying the reward signal in the reinforcement learning setup.

First, in addition to maximizing toxicity, they include an entropy bonus that encourages the red-team model to be more random as it explores different prompts. Second, to make the agent curious, they include two novelty rewards.

One rewards the model based on the similarity of words in its prompts, and the other rewards the model based on semantic similarity. (Less similarity yields a higher reward.)

To prevent the red-team model from generating random, nonsensical text, which can trick the classifier into awarding a high toxicity score, the researchers also added a naturalistic language bonus to the training objective.
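Putting those pieces together, the shaped reward could be combined roughly as in the minimal sketch below. The function name, the weights, and the simple weighted sum are illustrative assumptions, not the authors' published implementation:

    def red_team_reward(toxicity, policy_entropy, lexical_novelty,
                        semantic_novelty, naturalness,
                        w_ent=0.01, w_lex=1.0, w_sem=1.0, w_nat=0.1):
        # Hypothetical combination of the reward terms described above:
        # toxicity:         safety classifier's score for the target chatbot's reply
        # policy_entropy:   entropy bonus encouraging more random prompt generation
        # lexical_novelty:  higher when the prompt's wording differs from past prompts
        # semantic_novelty: higher when the prompt's meaning differs from past prompts
        # naturalness:      language-model score that penalizes nonsensical text
        return (toxicity
                + w_ent * policy_entropy
                + w_lex * lexical_novelty
                + w_sem * semantic_novelty
                + w_nat * naturalness)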

With these additions in place, the researchers compared the toxicity and diversity of responses their red-team model generated with other automated techniques. Their model outperformed the baselines on both metrics.

They also used their red-team model to test a chatbot that had been fine-tuned with human feedback so it would not give toxic replies. Their curiosity-driven approach was able to quickly produce 196 prompts that elicited toxic responses from this safe chatbot.

"We are seeing a surge of models, which is only expected to rise. Imagine thousands of models or even more, and companies/labs pushing model updates frequently. These models are going to be an integral part of our lives, and it's important that they are verified before being released for public consumption.

"Manual verification of models is simply not scalable, and our work is an attempt to reduce the human effort to ensure a safer and trustworthy AI future," says Agrawal.

In the future, the researchers want to enable the red-team model to generate prompts about a wider variety of topics. They also want to explore the use of a large language model as the toxicity classifier. In this way, a user could train the toxicity classifier using a company policy document, for instance, so a red-team model could test a chatbot for company policy violations.

"If you are releasing a new AI model and are concerned about whether it will behave as expected, consider using curiosity-driven red-teaming," says Agrawal.

Funding: This research is funded, in part, by Hyundai Motor Company, Quanta Computer Inc., the MIT-IBM Watson AI Lab, an Amazon Web Services MLRA research grant, the U.S. Army Research Office, the U.S. Defense Advanced Research Projects Agency Machine Common Sense Program, the U.S. Office of Naval Research, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

Author: Adam Zewe. Source: MIT. Contact: Adam Zewe, MIT. Image: The image is credited to Neuroscience News.

Original Research: The findings will be presented at the International Conference on Learning Representations

More here:
Reducing Toxic AI Responses - Neuroscience News

Read More..

A Vision of the Future: Machine Learning in Packaging Inspection – Packaging Digest

As we navigate through the corridors of modern manufacturing, the influence of machine vision and machine learning on the packaging industry stands as a testament to technological evolution. This integration, though largely beneficial, introduces a spectrum of complexities, weaving a narrative that merits a closer examination.

In unpacking the layers of this technological marvel, we should not only tout its enhancements but also recognize its challenges and ethical considerations.

Machine vision, equipped with the power of machine learning algorithms, has ushered in a new era for packaging. This synergy has transcended traditional boundaries, offering precision, efficiency, and adaptability previously unattainable. With the ability to analyze visual data and learn from it, these systems have revolutionized quality control, ensuring that products meet the high standards consumers have come to expect.

The benefits are manifold. Machine vision systems, with their tireless eyes, can inspect products at speeds and accuracies far beyond human capabilities. They detect even the minutest defects, from misaligned labels to imperfect seals, ensuring that only flawless products reach the market. This not only enhances brand reputation but also significantly reduces waste, contributing to more sustainable manufacturing practices.

Moreover, machine learning algorithms enable these systems to improve over time. They learn from every product inspected, becoming more adept at identifying defects and adapting to new packaging designs without the need for extensive reprogramming. This adaptability is crucial in an era where product cycles are rapid and consumer demands are ever-evolving.

One of the most significant impacts of machine vision and learning in packaging is the leap in operational efficiency it enables. Automated inspection lines reduce downtime, allowing for continuous production that keeps pace with demand.

Furthermore, the integration of these technologies facilitates personalized packaging at scale. Machine vision systems can adjust to package products according to individual specifications, catering to the growing market for personalized goods, from custom-labeled beverages to bespoke cosmetic kits.

Yet, as with any technological advancement, the integration of machine vision and machine learning in packaging is not without its challenges.

The complexity of these systems necessitates a high level of expertise, posing a significant hurdle for smaller manufacturers. The initial investment in sophisticated equipment and the ongoing need for skilled personnel to manage and interpret data can widen the technological divide, potentially pushing smaller players out of the competition.

Data privacy and security emerge as paramount concerns. Machine learning algorithms thrive on data, raising questions about the ownership and protection of the data collected during the packaging process. As these systems become more integrated into manufacturing operations, ensuring the security of sensitive information against breaches becomes a critical issue that manufacturers must address.

Moreover, the reliance on machine vision and learning systems introduces the risk of over-automation. While these technologies can enhance efficiency, there is a fine line between leveraging them to support human workers and replacing them altogether. The potential for job displacement raises ethical questions about the responsibility of manufacturers to their workforce and the broader societal implications of widespread automation.

The path forward requires a careful balancing act. Manufacturers must embrace the benefits of machine vision and learning while remaining cognizant of the potential pitfalls.

Investing in training and development programs can help mitigate the risk of job displacement, ensuring that workers are equipped with the skills needed to thrive in a technologically advanced workplace.

Transparency in data collection and processing, coupled with robust cybersecurity measures, can address privacy concerns, building trust among consumers and stakeholders. Moreover, manufacturers can adopt a phased approach to the integration of these technologies, allowing for gradual adaptation and minimizing disruption.

The impact of machine vision and machine learning on the packaging industry is undeniable, offering unparalleled enhancements in quality control, efficiency, and customization. Yet, as we chart this course of technological integration, we must navigate the complexities it introduces with foresight and responsibility.

By addressing the challenges head-on and adhering to ethical standards, the packaging industry can harness the full potential of these advancements, propelling itself towards a future that is not only more efficient and adaptable but also equitable and secure.

In this journey, the clear sight of progress must be guided by the wisdom to recognize its potential shadows, ensuring that the path we tread is illuminated by both innovation and integrity.

View post:
A Vision of the Future: Machine Learning in Packaging Inspection - Packaging Digest

Read More..

Integrating machine learning and gait analysis into orthopedic practice can lead to more effective care – News-Medical.Net

Investigators have applied artificial intelligence techniques to gait analyses and medical records data to provide insights about individuals with leg fractures and aspects of their recovery.

The study, which is published in the Journal of Orthopaedic Research, uncovered a significant association between the rates of hospital readmission after fracture surgery and the presence of underlying medical conditions. Correlations were also found between underlying medical conditions and orthopedic complications, although these links were not significant.

It was also apparent that gait analyses in the early postinjury phase offer valuable insights into the injury's impact on locomotion and recovery. For clinical professionals, these patterns were key to optimizing rehabilitation strategies.

Our findings demonstrate the profound impact that integrating machine learning and gait analysis into orthopedic practice can have, not only in improving the accuracy of post-injury complication predictions but also in tailoring rehabilitation strategies to individual patient needs. This approach represents a pivotal shift towards more personalized, predictive, and ultimately more effective orthopedic care."

Mostafa Rezapour, PhD, corresponding author, Wake Forest University School of Medicine

Dr. Rezapour added that the study underscores the critical importance of adopting a holistic view that encompasses not just the mechanical aspects of injury recovery but also the broader spectrum of patient health. "This is a step forward in our quest to optimize rehabilitation strategies, reduce recovery times, and improve overall quality of life for patients with lower extremity fractures," he said.

Journal reference:

Rezapour, M., et al. (2024) Employing machine learning to enhance fracture recovery insights through gait analysis. Journal of Orthopaedic Research. doi.org/10.1002/jor.25837.

Read this article:
Integrating machine learning and gait analysis into orthopedic practice can lead to more effective care - News-Medical.Net

Read More..