
Introducing NNUE Evaluation – Stockfish – Open Source Chess Engine

As of August 6, the efficiently updatable neural network (NNUE) evaluation has landed in the Stockfish repo!

Both the NNUE and the classical evaluations are available, and can be used to assign a value to a position that is later used in alpha-beta (PVS) search to find the best move. The classical evaluation computes this value as a function of various chess concepts, handcrafted by experts, tested and tuned using fishtest. The NNUE evaluation computes this value with a neural network based on basic inputs. The network is optimized and trained on the evaluations of millions of positions at moderate search depth.
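To make the role of the evaluation concrete, here is a minimal alpha-beta (negamax) sketch in Python; the move-generation callbacks are hypothetical and this is not Stockfish code, but it shows where either evaluation plugs into the search:

```python
# Minimal negamax alpha-beta sketch: the evaluation function (classical or NNUE)
# supplies the leaf values that the search backs up to pick a move.
# legal_moves() and make_move() are caller-supplied, hypothetical helpers.

def alpha_beta(pos, depth, alpha, beta, evaluate, legal_moves, make_move):
    moves = legal_moves(pos)
    if depth == 0 or not moves:
        return evaluate(pos)  # classical or NNUE evaluation is called here
    best = -float("inf")
    for move in moves:
        score = -alpha_beta(make_move(pos, move), depth - 1, -beta, -alpha,
                            evaluate, legal_moves, make_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # cutoff: the opponent already has a better alternative
            break
    return best
```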

The NNUE evaluation was first introduced in shogi, and ported to Stockfish afterward. It can be evaluated efficiently on CPUs, and exploits the fact that only parts of the neural network need to be updated after a typical chess move. The nodchip repository provides additional tools to train and develop the NNUE networks.
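A rough sketch of the "efficiently updatable" idea, under simplified assumptions (a single first layer, NumPy floats instead of SIMD integer code, illustrative dimensions), could look like the following: only the features a move removes and adds are touched, rather than recomputing the whole layer.

```python
import numpy as np

# Sketch of incremental first-layer ("accumulator") updates in an NNUE-style net.
# The first layer's output is a sum of weight rows for the active input features,
# so after a move we subtract the rows of removed features and add those of the
# new ones. Sizes below are illustrative, not Stockfish's actual network shape.
N_FEATURES, HIDDEN = 41024, 256
W = np.random.randn(N_FEATURES, HIDDEN).astype(np.float32)

def full_refresh(active_features):
    """Recompute the accumulator from scratch (done only occasionally)."""
    return W[list(active_features)].sum(axis=0)

def incremental_update(acc, removed, added):
    """Cheap per-move update: touch only the features that changed."""
    for f in removed:
        acc = acc - W[f]
    for f in added:
        acc = acc + W[f]
    return acc
```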

The performance of the NNUE evaluation relative to the classical evaluation depends somewhat on the hardware, and is expected to improve quickly, but currently stands at more than 80 Elo on fishtest:

Stockfish 12 is not expected to be released imminently; we want some time to let this major change bake for a bit. But you might still want to try out NNUE! Three simple steps:

This patch is the result of contributions of various authors, from various communities, including: nodchip, ynasu87, yaneurao (initial port and NNUE authors), domschl, FireFather, rqs, xXH4CKST3RXx, tttak, zz4032, joergoster, mstembera, nguyenpham, erbsenzaehler, dorzechowski, and vondele.

This new evaluation needed various changes to fishtest and the corresponding infrastructure, for which tomtor, ppigazzini, noobpwnftw, daylen, and vondele are gratefully acknowledged.

The first networks have been provided by gekkehenker and sergiovieri,with the latter net being the current default.

Guidelines for testing new nets can be found here.

Integration has been discussed in various Github issues:

The pull requests:

This will be an exciting time for computer chess; we are looking forward to seeing the evolution of this approach.

View original post here:
Introducing NNUE Evaluation - Stockfish - Open Source Chess Engine

Read More..

Dogecoin Replaces Cardano as 6th Largest Cryptocurrency – CoinDesk

  1. Dogecoin Replaces Cardano as 6th Largest Cryptocurrency – CoinDesk
  2. Dogecoin Pulls Ahead Of Cardano As 8th Largest Cryptocurrency, Charles Hoskinson Makes This Prediction – Benzinga
  3. As Elon Musk closes in on Twitter, his favorite cryptocurrency soars – Fortune
  4. Dogecoin surges on Elon Musk's Twitter deal – Reuters
  5. Dogecoin Becomes The 8th Largest Cryptocurrency, Overtaking Cardano – Business Blockchain HQ

Go here to read the rest:
Dogecoin Replaces Cardano as 6th Largest Cryptocurrency - CoinDesk

Read More..

What is Cloud Storage? – Cloud Storage – AWS

Cloud storage has several use cases in application management, data management, and business continuity. Let's consider some examples below.

Traditional on-premises storage solutions can be inconsistent in their cost, performance, and scalability, especially over time. Analytics demand large-scale, affordable, highly available, and secure storage pools that are commonly referred to as data lakes.

Data lakes built on object storage keep information in its native form and include rich metadata that allows selective extraction and use for analysis. Cloud-based data lakes can sit at the center of multiple kinds of data warehousing and processing, as well as big data and analytical engines, to help you accomplish your next project in less time and with more targeted relevance.

Backup and disaster recovery are critical for data protection and accessibility, but keeping up with increasing capacity requirements can be a constant challenge. Cloud storage brings low cost, high durability, and extreme scale to data backup and recovery solutions. Embedded data management policies can automatically migrate data to lower-cost storage based on frequency or timing settings, and archival vaults can be created to help comply with legal or regulatory requirements. These benefits allow for tremendous scale possibilities within industries such as financial services, healthcare and life sciences, and media and entertainment that produce high volumes of unstructured data with long-term retention needs.
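As one hedged illustration of such a policy (the bucket name, prefix, and day thresholds below are hypothetical), an Amazon S3 lifecycle rule can transition aging backups to lower-cost storage classes automatically:

```python
import boto3

# Hypothetical lifecycle rule: move backups to cheaper storage tiers as they age,
# then into an archival class for long-term retention.
s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```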

Software test and development environments often require separate, independent, and duplicate storage environments to be built out, managed, and decommissioned. In addition to the time required, the up-front capital costs required can be extensive.

Many of the largest and most valuable companies in the world create applications in record time by using the flexibility, performance, and low cost of cloud storage. Even the simplest static websites can be improved at low cost. IT professionals and developers are turning to pay-as-you-go storage options that remove management and scale headaches.

The availability, durability, and low cost of cloud storage can be very compelling. On the other hand, IT personnel working with storage, backup, networking, security, and compliance administrators might have concerns about the realities of transferring large amounts of data to the cloud. For some, getting data into the cloud can be a challenge. Hybrid, edge, and data movement services meet you where you are in the physical world to help ease your data transfer to the cloud.

Storing sensitive data in the cloud can raise concerns about regulation and compliance, especially if this data is currently stored in compliant storage systems. Cloud data compliance controls are designed to ensure that you can deploy and enforce comprehensive compliance controls on your data, helping you satisfy compliance requirements for virtually every regulatory agency around the globe. Often through a shared responsibility model, cloud vendors allow customers to manage risk effectively and efficiently in the IT environment, and provide assurance of effective risk management through compliance with established, widely recognized frameworks and programs.

Cloud-native applications use technologies like containerization and serverless to meet customer expectations in a fast-paced and flexible manner. These applications are typically made of small, loosely coupled, independent components called microservices that communicate internally by sharing data or state. Cloud storage services provide data management for such applications and provide solutions to ongoing data storage challenges in the cloud environment.

Enterprises today face significant challenges with exponential data growth. Machine learning (ML) and analytics give data more uses than ever before. Regulatory compliance requires long retention periods. Customers need to replace on-premises tape and disk archive infrastructure with solutions that provide enhanced data durability, immediate retrieval times, better security and compliance, and greater data accessibility for advanced analytics and business intelligence.

Many organizations want to take advantage of the benefits of cloud storage, but have applications running on premises that require low-latency access to their data, or need rapid data transfer to the cloud. Hybrid cloud storage architectures connect your on-premises applications and systems to cloud storage to help you reduce costs, minimize management burden, and innovate with your data.

Because block storage has high performance and is readily updatable, many organizations use it for transactional databases. With its limited metadata, block storage is able to deliver the ultra-low latency required for high-performance workloads and latency-sensitive applications like databases.

Block storage allows developers to set up a robust, scalable, and highly efficient transactional database. As each block is a self-contained unit, the database performs optimally, even when the stored data grows.

With cloud storage, you can process, store, and analyze data close to your applications and then copy data to the cloud for further analysis. With cloud storage, you can store data efficiently and cost-effectively while supporting ML, artificial intelligence (AI), and advanced analytics to gain insights and innovate for your business.

Continue reading here:
What is Cloud Storage? - Cloud Storage - AWS

Read More..

Understand Firebase Security Rules for Cloud Storage | Firebase Storage

Traditionally, security has been one of the most complex parts of app development. In most applications, developers must build and run a server that handles authentication (who a user is) and authorization (what a user can do). Authentication and authorization are hard to set up, harder to get right, and critical to the success of your product.

Similar to how Firebase Authentication makes it easy for you to authenticate your users, Firebase Security Rules for Cloud Storage makes it easy for you to authorize users and validate requests. Cloud Storage Security Rules manage the complexity for you by allowing you to specify path-based permissions. In just a few lines of code, you can write authorization rules that restrict Cloud Storage requests to a certain user or limit the size of an upload.

The Firebase Realtime Database has a similar feature, called Firebase Realtime Database Rules.

Knowing who your users are is an important part of building an application, and Firebase Authentication provides an easy-to-use, secure, client-side-only solution to authentication. Firebase Security Rules for Cloud Storage ties into Firebase Authentication for user-based security. When a user is authenticated with Firebase Authentication, the request.auth variable in Cloud Storage Security Rules becomes an object that contains the user's unique ID (request.auth.uid) and all other user information in the token (request.auth.token). When the user is not authenticated, request.auth is null. This allows you to securely control data access on a per-user basis. You can learn more in the Authentication section.

Identifying your user is only part of security. Once you know who they are, youneed a way to control their access to files in Cloud Storage.

Cloud Storage lets you specify per-file and per-path authorization rules that live on our servers and determine access to the files in your app. For example, the default Cloud Storage Security Rules require Firebase Authentication in order to perform any read or write operations on all files:
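A sketch of those default rules, matching the standard template Firebase generates for new projects, looks like this:

```
rules_version = '2';

// Default rules: any authenticated user may read or write any file;
// unauthenticated requests are rejected.
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if request.auth != null;
    }
  }
}
```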

You can edit these rules by selecting a Firebase app in the Firebase consoleand viewing the Rules tab of the Storage section.

Firebase Security Rules for Cloud Storage can also be used for data validation, including validating file name and path as well as file metadata properties such as contentType and size.
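For example, a hedged sketch of a rule that combines per-user access with metadata validation (the path layout, 5 MB limit, and image-only restriction are illustrative choices, not Firebase defaults) might look like:

```
rules_version = '2';

service firebase.storage {
  match /b/{bucket}/o {
    // Illustrative rule: users may read any file, but may only write images
    // smaller than 5 MB into a folder named after their own Firebase Auth UID.
    match /users/{userId}/{fileName} {
      allow read: if request.auth != null;
      allow write: if request.auth != null
                   && request.auth.uid == userId
                   && request.resource.size < 5 * 1024 * 1024
                   && request.resource.contentType.matches('image/.*');
    }
  }
}
```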

Read more:
Understand Firebase Security Rules for Cloud Storage | Firebase Storage

Read More..

Save up to 85% off a Polarbackup cloud storage plan, starting at $99 (Reg. $699) – 9to5Toys

  1. Save up to 85% off a Polarbackup cloud storage plan, starting at $99 (Reg. $699) – 9to5Toys
  2. Lock in 1TB of Cloud Storage for a Prime Day-Like Deal – Entrepreneur
  3. This Polarbackup Cloud Storage Lifetime Subscription Is 85% Off, And Will Save Everything Forever – The Inventory
  4. This Overstock deal on cloud storage is too good to pass up – TechRepublic
  5. Save Like It's Prime Day With 94% Off 1TB Secure Cloud Storage – PCMag

View post:
Save up to 85% off a Polarbackup cloud storage plan, starting at $99 (Reg. $699) - 9to5Toys

Read More..

The Global Data Center As A Service Market size is expected to reach $269.7 billion by 2028, rising at a market growth of 27.2% CAGR during the…

The Global Data Center As A Service Market size is expected to reach $269.7 billion by 2028, rising at a market growth of 27.2% CAGR during the forecast period – GlobeNewswire

Read more here:
The Global Data Center As A Service Market size is expected to reach $269.7 billion by 2028, rising at a market growth of 27.2% CAGR during the...

Read More..

HEALTHSTREAM INC Management’s Discussion and Analysis of Financial Condition and Results of Operations (form 10-Q) – Marketscreener.com

HEALTHSTREAM INC Management's Discussion and Analysis of Financial Condition and Results of Operations (form 10-Q) – Marketscreener.com

See the article here:
HEALTHSTREAM INC Management's Discussion and Analysis of Financial Condition and Results of Operations (form 10-Q) - Marketscreener.com

Read More..

SPS COMMERCE INC Management’s Discussion and Analysis of Financial Condition and Results of Operations (form 10-Q) – Marketscreener.com

SPS COMMERCE INC Management's Discussion and Analysis of Financial Condition and Results of Operations (form 10-Q) – Marketscreener.com

Here is the original post:
SPS COMMERCE INC Management's Discussion and Analysis of Financial Condition and Results of Operations (form 10-Q) - Marketscreener.com

Read More..

Hubble Ultra-Deep Field – Wikipedia

Image captions: Hubble Deep UV (HDUV) Legacy Survey, 15,000 galaxies, released August 16, 2018; ABYSS WFC3/IR Hubble Ultra Deep Field, released January 24, 2019.

The Hubble Ultra-Deep Field (HUDF) is a deep-field image of a small region of space in the constellation Fornax, containing an estimated 10,000 galaxies. The original data for the image was collected by the Hubble Space Telescope from September 2003 to January 2004. It includes light from galaxies that existed about 13 billion years ago, some 400 to 800 million years after the Big Bang.

The HUDF image was taken in a section of the sky with a low density of bright stars in the near-field, allowing much better viewing of dimmer, more distant objects. Located southwest of Orion in the southern-hemisphere constellation Fornax, the rectangular image is 2.4 arcminutes to an edge,[1] or 3.4 arcminutes diagonally. This is about one-tenth of the angular diameter of a full moon viewed from Earth (less than 34 arcminutes),[2] smaller than a 1 mm² piece of paper held 1 m away, and equal to roughly one twenty-six-millionth of the total area of the sky. The image is oriented so that the upper left corner points toward north (46.4°) on the celestial sphere.
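The quoted fraction of the sky can be checked with a quick back-of-the-envelope calculation from the 2.4-arcminute side length given above:

```python
import math

# Rough check of the "one twenty-six-millionth of the sky" figure.
side_arcmin = 2.4
field_area = side_arcmin ** 2                       # ~5.76 square arcminutes
full_sky_deg2 = 4 * math.pi * (180 / math.pi) ** 2  # ~41,253 square degrees
full_sky_arcmin2 = full_sky_deg2 * 60 ** 2
print(full_sky_arcmin2 / field_area)                # ~25.8 million, i.e. ~1/26,000,000
```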

In August and September 2009, the HUDF field was observed at longer wavelengths (1.0 to 1.6 μm) using the infrared channel of the recently fitted Wide Field Camera 3 (WFC3). This additional data enabled astronomers to identify a new list of potentially very distant galaxies.[3][4]

On September 25, 2012, NASA released a new version of the Ultra-Deep Field dubbed the eXtreme Deep Field (XDF). The XDF reveals galaxies from 13.2 billion years ago, including one thought to have formed only 450 million years after the Big Bang.[5]

On June 3, 2014, NASA released the Hubble Ultra Deep Field 2014 image, the first HUDF image to use the full range of ultraviolet to near-infrared light.[6] A composite of separate exposures taken in 2002 to 2012 with Hubble's Advanced Camera for Surveys and Wide Field Camera 3, it shows some 10,000 galaxies.[7]

On January 23, 2019, the Instituto de Astrofísica de Canarias released an even deeper version[8] of the infrared images of the Hubble Ultra Deep Field obtained with the WFC3 instrument, named the ABYSS Hubble Ultra Deep Field. The new images improve the previous reduction of the WFC3/IR images, including careful sky background subtraction around the largest galaxies in the field of view. After this update, some galaxies were found to be almost twice as big as previously measured.[9][10]

In the years since the original Hubble Deep Field, the Hubble Deep Field South and the GOODS sample were analyzed, providing increased statistics at the high redshifts probed by the HDF. When the Advanced Camera for Surveys (ACS) detector was installed on the HST, it was realized that an ultra-deep field could observe galaxy formation out to even higher redshifts than had currently been observed, as well as providing more information about galaxy formation at intermediate redshifts (z~2).[11] A workshop on how to best carry out surveys with the ACS was held at STScI in late 2002. At the workshop Massimo Stiavelli advocated an Ultra Deep Field as a way to study the objects responsible for the reionization of the Universe.[12] Following the workshop, the STScI Director Steven Beckwith decided to devote 400 orbits of Director's Discretionary time to the UDF and appointed Stiavelli as the lead of the Home Team implementing the observations.

Unlike the Deep Fields, the HUDF does not lie in Hubble's Continuous Viewing Zone (CVZ). The earlier observations, using the Wide Field and Planetary Camera 2 (WFPC2) camera, were able to take advantage of the increased observing time on these zones by using wavelengths with higher noise to observe at times when earthshine contaminated the observations; however, ACS does not observe at these wavelengths, so the advantage was reduced.[11]

As with the earlier fields, this one was required to contain very little emission from our galaxy, with little zodiacal dust. The field was also required to be in a range of declinations such that it could be observed both by southern hemisphere instruments, such as the Atacama Large Millimeter Array, and northern hemisphere ones, such as those located on Hawaii. It was ultimately decided to observe a section of the Chandra Deep Field South, due to existing deep X-ray observations from the Chandra X-ray Observatory and two interesting objects already observed in the GOODS sample at the same location: a redshift 5.8 galaxy and a supernova. The coordinates of the field are right ascension 3h 32m 39.0s, declination −27° 47′ 29.1″ (J2000). The field is 200 arcseconds to a side, with a total area of 11 square arcminutes,[11] and lies in the constellation of Fornax.[13]

Four filters were used on the ACS, centered on 435, 606, 775 and 850 nm, with exposure times set to give equal sensitivity in all filters. These wavelength ranges match those used by the GOODS sample, allowing direct comparison between the two. As with the Deep Fields, the HUDF used Director's Discretionary Time. In order to get the best resolution possible, the observations were dithered by pointing the telescope at slightly different positions for each exposure, a process trialled with the Hubble Deep Field, so that the final image has a higher resolution than the pixels on their own would normally allow.[11]

The observations were done in two sessions, from September 23 to October 28, 2003, and December 4, 2003, to January 15, 2004. The total exposure time is just under 1 million seconds, from 400 orbits, with a typical exposure time of 1200 seconds.[11] In total, 800 ACS exposures were taken over the course of 11.3 days, two per orbit; NICMOS observed for 4.5 days. All the individual ACS exposures were processed and combined by Anton Koekemoer into a set of scientifically useful images, each with a total exposure time ranging from 134,900 seconds to 347,100 seconds. To observe the whole sky to the same sensitivity, the HST would need to observe continuously for a million years.[13]
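Those exposure figures are roughly self-consistent, as a quick check shows:

```python
# 800 individual ACS exposures at a typical 1200 s each
total_s = 800 * 1200
print(total_s)          # 960,000 s, i.e. just under one million seconds
print(total_s / 86400)  # ~11.1 days of accumulated exposure, close to the 11.3 days quoted
```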

The sensitivity of the ACS limits its capability of detecting galaxies to about redshift 6. The deep NICMOS fields obtained in parallel to the ACS images could in principle be used to detect galaxies at redshift 7 or higher, but they were lacking visible-band images of similar depth. These are necessary to identify high-redshift objects, as they should not be seen in the visible bands. In order to obtain deep visible exposures on top of the NICMOS parallel fields, a follow-up program, HUDF05, was approved and granted 204 orbits to observe the two parallel fields (GO-10632).[14] The orientation of the HST was chosen so that further NICMOS parallel images would fall on top of the main UDF field.

After the installation of WFC3 on Hubble in 2009, the HUDF09 programme (GO-11563) devoted 192 orbits to observations of three fields, including HUDF, using the newly available F105W, F125W and F160W infra-red filters (which correspond to the Y, J and H bands):[4][15]


The HUDF is the deepest image of the universe ever taken and has been used to search for galaxies that existed between 400 and 800 million years after the Big Bang (redshifts between 7 and 12).[13] Several galaxies in the HUDF are candidates, based on photometric redshifts, to be amongst the most distant astronomical objects. The red dwarf UDF 2457, at a distance of 59,000 light-years, is the furthest star resolved by the HUDF.[16] The star near the center of the field is USNO-A2.0 0600-01400432, with an apparent magnitude of 18.95.[17][better source needed]

The field imaged by the ACS contains over 10,000 objects, the majority of which are galaxies, many at redshifts greater than 3, and some that probably have redshifts between 6 and 7.[11] The NICMOS measurements may have discovered galaxies at redshifts up to 12.[13]

The HUDF has revealed high rates of star formation during the very early stages of galaxy formation, within a billion years after the Big Bang.[11] It has also enabled improved characterization of the distribution of galaxies, their numbers, sizes and luminosities at different epochs, aiding investigation into the evolution of galaxies.[11] Galaxies at high redshifts have been confirmed to be smaller and less symmetrical than ones at lower redshifts, illuminating the rapid evolution of galaxies in the first couple of billion years after the Big Bang.[11]

The Hubble eXtreme Deep Field (HXDF), released on September 25, 2012, is an image of a portion of space in the center of the Hubble Ultra Deep Field image. Representing a total of two million seconds (about 23 days) of exposure time collected over 10 years, the image covers an area of 2.3 arcminutes by 2 arcminutes,[18] or about 80% of the area of the HUDF. This represents approximately one thirty-two millionth of the sky.

The HXDF contains about 5,500 galaxies, the oldest of which are seen as they were 13.2 billion years ago. The faintest galaxies are one ten-billionth the brightness of what the human eye can see. The red galaxies in the image are the remnants of galaxies after major collisions during their elderly years. Many of the smaller galaxies in the image are very young galaxies that eventually developed into major galaxies, similar to the Milky Way and other galaxies in our galactic neighborhood.[5]

Image and video captions: XDF size compared with the size of the Moon; the HXDF image shows mature galaxies in the foreground plane, nearly mature galaxies from 5 to 9 billion years ago, and protogalaxies beyond 9 billion years; a video (02:42) describes how the Hubble eXtreme Deep Field image was made.

Continued here:
Hubble Ultra-Deep Field - Wikipedia

Read More..