
If You Bought $1,000 Worth of Bitcoin a Year Ago, Here’s How Much You’d Have Today – The Motley Fool

Bitcoin has beaten the stock market, but you might be shocked by how much.

It's been a wild ride for Bitcoin throughout its 11-year history, and that's been especially true over the past year. Not only did the COVID-19 pandemic drive Bitcoin's price lower initially, but it also seems to have helped accelerate investor interest in the leading cryptocurrency.

Here's a look at how Bitcoin has performed for investors over the past year and what has driven its performance.

I won't keep you in suspense. Bitcoin has increased in value by 612% over the past year, as of this writing. This means that a $1,000 investment in Bitcoin made one year ago would be worth just over $7,100 now.

During the same period, the S&P 500 index, which is generally considered to be the best gauge of overall stock market performance, has delivered a 50% total return. Several stocks have doubled and tripled over the past year as the market rewarded companies that benefited from the stay-at-home economy. But there are very few stocks that have delivered returns in the same ballpark as Bitcoin. So, it's fair to say that Bitcoin has been a big success as an investment over the past year for buy-and-hold investors.
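The arithmetic behind those figures is straightforward: a percentage gain means the position is worth the original principal plus that fraction of the principal. A quick sketch:

```python
def future_value(principal: float, pct_gain: float) -> float:
    """Value of a position after a percentage gain (e.g. 612 means +612%)."""
    return principal * (1 + pct_gain / 100)

btc = future_value(1_000, 612)   # $1,000 in Bitcoin, up 612%
spx = future_value(1_000, 50)    # the same $1,000 tracking the S&P 500's total return

print(f"Bitcoin: ${btc:,.0f}")   # Bitcoin: $7,120
print(f"S&P 500: ${spx:,.0f}")   # S&P 500: $1,500
```

The $7,120 result matches the article's "just over $7,100" figure.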


Obviously, we can't go through every positive Bitcoin news item that has happened over the past year. But there have been three big themes that seem to have driven Bitcoin higher.

Here's the billion-dollar question. If Bitcoin ultimately gains mainstream adoption as a currency or ends up becoming a mainstream store of value (digital gold), there's a solid case to be made that Bitcoin could ultimately rise to $500,000 or even more. On the other hand, if the mainstream-use case doesn't pan out, or if investor interest starts to fade, it could go the other way just as easily.

The bottom line is that no investment that can deliver 7x returns in a year is without significant volatility and risk. If you're looking to buy Bitcoin or other cryptocurrencies, make sure you know what you're getting into.


Bitcoin's upcoming Taproot upgrade and why it matters for the network – Cointelegraph

While the vast majority of crypto enthusiasts around the globe seem to be gushing about Ether (ETH) at the moment and how its upcoming London hard fork stands to push the premier altcoin's value even higher, reports have recently surfaced suggesting that Bitcoin's much-awaited Taproot upgrade will also go live sometime before the end of this year.

In this regard, many Bitcoin (BTC) mining pools already seem to be signaling their support for the activation. As per data available on Taproot.watch, a website designed by Bitcoin Core developer Hampus Sjöberg, Taproot signaling currently accounts for about 56% of BTC's total hashing power.

It should be mentioned that two of the largest Bitcoin mining pools by hash rate, AntPool and F2Pool, have been major proponents of this upgrade from the very beginning. Other relatively large mining operators, such as Foundry USA and Slush Pool, have also expressed their support for the activation.

In its most basic sense, Taproot can be thought of as the latest step in Bitcoin's evolutionary path: the upgrade seeks not only to enhance the overall usability of the network by making transactions cheaper, faster and easier to deploy, but also to eventually allow for the deployment of smart contracts.

Furthermore, Taproot also makes significant privacy promises: it seeks to make all transactions look the same to everyone except the transacting parties. This camouflage-style framework appears to have been inspired by the security-centric crypto offerings available on the market today, potentially moving Bitcoin closer to privacy-focused coins, at least from a design standpoint.

On the subject, Antoni Trenchev, co-founder and managing partner at crypto lending platform Nexo, told Cointelegraph that the proposed Taproot update is proof of Bitcoin's decentralized nature and that the network is always looking to improve and grow. He also believes the upgrade serves as a reminder to investors that, unlike gold, Bitcoin is a dynamic store of value in every sense.

Joel Edgerton, chief operating officer of cryptocurrency exchange bitFlyer USA, told Cointelegraph that even though most in the crypto community seem to be focused on Bitcoin's price action at the moment, they are overlooking the fact that BTC's underlying technology is what actually gives it its value, and that Taproot is an important development for several reasons.

In Edgerton's view, Taproot demonstrates the Bitcoin community's maturity and shows that valuable lessons were learned from the Bitcoin upgrade wars of 2017; namely, that it is of utmost importance to plan and implement upgrades via a decentralized, community-based vote.

Back in 2017, Bitcoin underwent a hard fork, resulting in the creation of a new cryptocurrency called Bitcoin Cash (BCH). Although the process itself was quite straightforward, the period leading up to the hard fork was full of strife, with many core community members clashing with one another.

On paper, Taproot is an elegant engineering solution that has been devised using proven cryptographic foundations that can help provide several evolutionary improvements to the Bitcoin protocol.

Lior Yaffe, CEO of blockchain software company Jelurida, pointed out to Cointelegraph that by combining Schnorr signatures and Merkelized Abstract Syntax Trees (MAST), Taproot can make complex Bitcoin transactions, such as multisignature transactions and the transactions used to set up a Lightning channel, look just like a regular Bitcoin transaction when submitted on-chain.

In cryptography, a Schnorr signature is valued for its simplicity and its provable security under standard discrete-logarithm assumptions, and it offers practical advantages, such as signature aggregation, over the ECDSA signatures Bitcoin has used to date. A MAST, in turn, is a Merkle-tree structure that commits to multiple user-defined spending conditions, only one of which must be fulfilled and revealed on-chain in order for the encumbered bitcoins to be spent.
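To illustrate the MAST idea, here is a minimal, hypothetical sketch of committing to two spending conditions and revealing only one of them at spend time. It uses plain SHA-256 rather than Bitcoin's actual tagged-hash scheme, and the two "scripts" are invented placeholders:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Two alternative spending conditions; only one will ever be revealed on-chain.
cond_a = b"pubkey_alice CHECKSIG"
cond_b = b"2-of-3 multisig: alice bob carol"

leaf_a, leaf_b = h(cond_a), h(cond_b)
root = h(leaf_a + leaf_b)          # the Merkle root committed in the output

# Spending via cond_a: reveal that condition plus the sibling hash only.
proof = (cond_a, leaf_b)

def verify(condition: bytes, sibling: bytes, expected_root: bytes) -> bool:
    """Recompute the root from the revealed leaf and its sibling hash."""
    return h(h(condition) + sibling) == expected_root

print(verify(*proof, root))        # True; cond_b itself stays private
```

The key property is that `cond_b` never appears on-chain, only its hash, which is the privacy gain the article describes.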

"Overall, this reduces storage and can indirectly lower fees for these transaction types. Also, in the long run, when its usage becomes widespread, Taproot may be able to significantly improve privacy for Lightning and multisignature users. From an ecosystem perspective, I view Taproot as an all-in attempt by the Bitcoin devs and community to finally make the Lightning Network a mainstream payment platform," Yaffe added.

Another aspect of the proposed upgrade worth considering is whether Taproot will have any major effect on Bitcoin's future price action. In this regard, Edgerton does not see the activation having any short-term impact on Bitcoin's value. He does believe, however, that the under-the-hood changes that come with this update will make the Bitcoin network far more functional and competitive.

Yaffe believes that in the long term, improving Lightning Network adoption by reducing transaction fees and settlement times will keep Bitcoin and its ecosystem relevant as an internet-era payment method.

Lastly, Siddharth Menon, co-founder and chief operating officer of cryptocurrency exchange WazirX, told Cointelegraph that the Taproot upgrade is among the most highly anticipated changes to the network since 2010 and stands to have a positive impact on the currency. "Slowly but steadily, this network gets better every day," he added.

Per Bitcoin's community consensus rules, the aforementioned Taproot activation will only be given the green light if 90% of all mined blocks include an activation signal within a single difficulty adjustment window (2,016 blocks). More specifically, this consensus must be reached during one of the difficulty epochs between now and Aug. 11 for the network upgrade to go ahead as planned in November.

As of May 7, a total of 327 signaling blocks had been mined in the current window, while miners responsible for 610 blocks had chosen not to include a signal bit. The "no" votes issued by these miners account for 30% of the 2,016 blocks in the current difficulty window, already well beyond the roughly 200 non-signaling blocks the 90% threshold permits, so activation cannot lock in during this epoch. Pools that have so far voted against the activation include big names such as Poolin, Binance Pool, BTC.com, viaBTC and HuobiBTC.
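The activation arithmetic above can be sketched in a few lines (a simplified model of the signaling threshold, not consensus code):

```python
WINDOW = 2016        # blocks per difficulty adjustment window
THRESHOLD = 0.90     # share of blocks in a window that must signal

def window_can_still_activate(signaling: int, non_signaling: int) -> bool:
    """True while reaching 90% signaling is still mathematically possible."""
    remaining = WINDOW - signaling - non_signaling
    best_case = signaling + remaining       # assume every remaining block signals
    return best_case >= THRESHOLD * WINDOW

# Figures reported as of May 7: 327 signaling, 610 non-signaling.
print(window_can_still_activate(327, 610))  # False: 610 "no" blocks already exceed the ~200 allowed
```

With 610 non-signaling blocks, even perfect signaling in the remaining 1,079 blocks tops out at 1,406 of the 1,815 needed, which is why the window is already lost.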


AI in MedTech: Risks and Opportunities of Innovative Technologies in Medical Applications – MedTech Intelligence

An increasing number of medical devices incorporate artificial intelligence (AI) capabilities to support therapeutic and diagnostic applications. In spite of the risks connected with this innovative technology, the applicable regulatory framework does not specify any requirements for this class of medical devices. To make matters even more complicated for manufacturers, there are no standards, guidance documents or common specifications for medical devices on how to demonstrate conformity with the essential requirements.

The term artificial intelligence (AI) describes the capability of algorithms to take over tasks and decisions by mimicking human intelligence.1 Many experts believe that machine learning, a subset of artificial intelligence, will play a significant role in the medtech sector.2,3 Machine learning is the term used to describe algorithms capable of learning directly from a large volume of training data. The algorithm builds a model based on the training data and applies the experience it has gained from training to make predictions and decisions on new, unknown data.

Artificial neural networks are a subset of machine learning methods that evolved from the idea of simulating the human brain.22 Neural networks are information-processing systems used for machine learning and comprise multiple layers of neurons. Between the input layer, which receives information, and the output layer, there are numerous hidden layers of neurons. In simple terms, neural networks comprise neurons, also known as nodes, which receive external information or information from other connected nodes, modify this information, and pass it on, either to the next neuron layer or to the output layer as the final result.5 Deep learning is a variation of artificial neural networks that consists of multiple hidden neural network layers between the input and output layers. The inner layers are designed to extract higher-level features from the raw external data.
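As a minimal illustration of the layered structure described above, here is a toy two-layer network forward pass. The weights are random and the network is untrained, purely illustrative, not a medical model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """A common activation function: passes positives, zeroes out negatives."""
    return np.maximum(0, x)

# A tiny network: 4 inputs -> one hidden layer of 8 neurons -> 1 output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x: np.ndarray) -> np.ndarray:
    hidden = relu(x @ W1 + b1)   # hidden layer modifies its inputs...
    return hidden @ W2 + b2      # ...and passes the result to the output layer

x = np.array([0.5, -1.2, 3.0, 0.1])   # one input sample with 4 features
print(forward(x).shape)               # (1,) - a single output value
```

Training would consist of adjusting `W1`, `b1`, `W2`, `b2` so that outputs match labeled training data, which is exactly the "learning from data" step the article contrasts with conventional programming.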

The role of artificial intelligence and machine learning in the health sector was a topic of debate well before the coronavirus pandemic.6 As an excerpt from PubMed shows, several approaches to AI in medical devices have already been implemented in the past (see Figure 1). However, the number of publications on artificial intelligence and medical devices has grown exponentially since roughly 2005.

Artificial intelligence in the medtech sector is at the beginning of a growth phase. However, expectations for this technology are already high, and consequently prospects for the digital future of the medical sector are auspicious. In the future, artificial intelligence may be able to support health professionals in critical tasks, controlling and automating complex processes. This will enable diagnosis, therapy and care to be optimally aligned to patients' individual needs, thereby increasing treatment efficiency, which in turn will help ensure an effective and affordable healthcare sector.4

However, some AI advocates tend to overlook the obstacles and risks encountered when artificial intelligence is implemented in clinical practice. This is particularly true for the upcoming regulation of this innovative technology. The risks of incorporating artificial intelligence in medical devices include faulty or manipulated training data, attacks on AI such as adversarial attacks, violation of privacy and lack of trust in the technology. In spite of these technology-related risks, the applicable standards and regulatory frameworks do not include any specific requirements for the use of artificial intelligence in medical devices. After years of negotiations in the European Parliament, Regulation (EU) 2017/745 on medical devices and Regulation (EU) 2017/746 on in-vitro diagnostic medical devices entered into force on May 25, 2017. In contrast to Directives, EU Regulations enter into force directly in the EU Member States and do not have to be transposed into national law. The new regulations impose strict demands on medical device manufacturers and on the Notified Bodies, which manufacturers must involve in the certification process of medical devices and in-vitro diagnostic medical devices (excluding class I medical devices and nonsterile class A in-vitro diagnostic medical devices, for which the manufacturer's self-declaration is sufficient).

Annex I to both the EU Regulation on medical devices (MDR) and the EU Regulation on in-vitro diagnostic medical devices (IVDR) defines general safety and performance requirements for medical devices and in-vitro diagnostics. However, these general requirements do not address the specific requirements related to artificial intelligence. To make matters even more complicated for manufacturers, there are no standards, guidance documents or common specifications on how to demonstrate conformity with the general requirements. To place a medical device on the European market, manufacturers must meet various criteria, including compliance with the essential requirements and completion of the appropriate conformity assessment procedure. By complying with the requirements, manufacturers ensure that their medical devices fulfill the high levels of safety and health protection required by the respective regulations.

To ensure the safety and performance of artificial intelligence in medical devices and in-vitro diagnostics, certain minimum requirements must be fulfilled. However, the above regulations define only general requirements for software. According to the general safety and performance requirements, software must be developed and manufactured in keeping with the state of the art. Factors to be taken into account include the software lifecycle process and risk management. Beyond this, repeatability, reliability and performance in line with the intended use of the medical device must be ensured. This implicitly requires artificial intelligence to be repeatable, performant, reliable and predictable, which is only possible with a verified and validated model. Due to the absence of relevant regulatory requirements and standards, manufacturers and Notified Bodies must each determine the state of the art for developing and testing artificial intelligence in medical devices. During the development, assessment and testing of AI, fundamental differences between artificial intelligence (particularly machine learning) and conventional software algorithms become apparent.

Towards the end of 2019, just weeks before the World Health Organization's (WHO) warning of an epidemic in China, a Canadian company (BlueDot) specializing in AI-based monitoring of the spread of infectious diseases alerted its customers to the same risk. To achieve this, the company's AI combed through news reports and databases of animal and plant diseases. By accessing global flight-ticketing data, the AI system correctly forecast the spread of the virus in the days after it emerged. This example shows the high level of performance that can already be achieved with artificial intelligence today.7 However, it also reveals one of the fundamental problems encountered with artificial intelligence: despite the distribution of information about the outbreak to various health organizations in different countries, international responses were few. One reason for this lack of response to the AI-based warning is the lack of trust in technology that we do not understand, which plays a particularly significant role in medical applications.

In clinical applications, artificial intelligence is predominantly used for diagnostic purposes. Analysis of medical images is the area where the development of AI models is most advanced. Artificial intelligence is successfully used in radiology, oncology, ophthalmology, dermatology and other medical disciplines.2 The advantages of using artificial intelligence in medical applications include the speed of data analysis and the capability of identifying patterns invisible to the human eye.

Take the diagnosis of osteoarthritis, for example. Although medical imaging enables healthcare professionals to identify osteoarthritis, this is generally at a late stage after the disease has already caused some cartilage breakdown. Using an artificial-intelligence system, a research team led by Dr. Shinjini Kundu analyzed magnetic resonance tomography (MRT) images. The team was able to predict osteoarthritis three years before the first symptoms manifested themselves.8 However, the team members were unable to explain how the AI system arrived at its diagnosis. In other words, the system was not explainable. The question now is whether patients will undergo treatment such as surgery, based on a diagnosis made by an AI system, which no doctor can either explain or confirm.

Further investigations revealed that the AI system identified diffusion of water into cartilage. It detected a symptom invisible to the human eye and, even more important, a pattern that had previously been unknown to science. This example again underlines the importance of trust in the decision of artificial intelligence, particularly in the medtech sector. Justification of decisions is one of the cornerstones of a doctor-patient (or AI-patient) relationship based on mutual trust. However, to do so the AI system must be explainable, understandable and transparent. Patients, doctors and other users will only trust in AI systems if their decisions can be explained and understood.

Many medical device manufacturers wonder why the assessment and development of artificial intelligence must follow a different approach from that of conventional software. The reason lies in the principles of how artificial intelligence is developed and how it performs. Conventional software algorithms take an input variable X, process it using a defined algorithm and supply the result Y as the output variable (if X, then Y). The algorithm is programmed, and its correct function can be verified and validated. The requirements for software development, validation and verification are described in the standards IEC 62304 and IEC 82304-1. However, there are fundamental differences between conventional software and artificial intelligence implementing a machine-learning algorithm. Machine learning is based on using data to train a model, without explicitly programming the data flow line by line. As described above, machine learning is trained using an automated appraisal of existing information (training data). Given this, both the development and the conformity assessment of artificial intelligence necessitate different standards. The following sections provide a brief overview of the typical pitfalls.
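The contrast can be sketched in a few lines: the rule-based function is fully specified by the programmer, while the learned function's behavior is derived from training data. This is a toy threshold-learning example, not a medical algorithm:

```python
# Conventional software: the mapping from X to Y is programmed explicitly.
def rule_based(x: float) -> int:
    return 1 if x > 37.5 else 0              # the 37.5 threshold is hard-coded

# Machine learning: the mapping is derived from labeled data instead.
def fit_threshold(xs: list[float], ys: list[int]) -> float:
    """Pick the candidate threshold that best separates the training data."""
    def errors(t):
        return sum((x > t) != bool(y) for x, y in zip(xs, ys))
    return min(sorted(xs), key=errors)

xs = [36.5, 37.0, 38.0, 39.0]                # training inputs
ys = [0, 0, 1, 1]                            # training labels
t = fit_threshold(xs, ys)                    # threshold learned, not programmed
learned = lambda x: 1 if x > t else 0
print(t)                                     # 37.0
print(learned(38.5))                         # 1
```

Verifying `rule_based` means checking code against a specification; verifying `learned` additionally requires asking whether the training data were adequate, which is precisely the shift in assessment the article describes.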

A major disadvantage of artificial intelligence, in particular machine learning based on neural networks, is the complexity of the algorithms. This makes them highly non-transparent, hence their designation as black-box AI (see Figure 2). The complex nature of AI algorithms concerns not only their mathematical description but also, in the case of deep-learning algorithms, their high level of dimensionality and abstraction. For these classes of AI, the extent to which input information contributes to a specific decision is mostly impossible to determine. Can we trust the prediction of the AI system in such a case and, in a worst-case scenario, can we identify a failure of the system or a misdiagnosis?

A world-famous example of the result of a black-box AI was the match between AlphaGo, the artificial-intelligence system made by DeepMind (Google), and Go world champion Lee Sedol. In the match, which was watched by an audience of 60 million including experts, move 37 showed the significance of these particular artificial-intelligence characteristics. The experts described the move as a mistake, predicting that AlphaGo would lose the match, since in their opinion the move made no sense at all. In fact, they went even further and said: "It's not a human move. I've never seen a human play this move."9

None of them understood the level of creativity behind AlphaGos move, which proved to be critical for winning the match. While understanding the decision made by the artificial intelligence system would certainly not change the outcome of the match, it still shows the significance of the explainability and transparency of artificial intelligence, particularly in the medical field. AlphaGo could also have been wrong!

One example of AI with an intended medical use was the application of artificial intelligence to determine a patient's risk of pneumonia, and it shows the danger of black-box AI in the medtech sector. The system in question surprisingly classified certain high-risk patients as being at low risk.10 Rich Caruana, one of the leading AI experts at Microsoft and one of the developers of the system, advised against the use of the artificial intelligence he had developed: "I said no. I said we don't understand what it does inside. I said I was afraid."11

In this context, it is important to note that open or explainable artificial intelligence, also referred to as white-box AI, is by no means inferior to black-box AI. While there are still no standard methods for opening the black box, there are promising approaches for ensuring the plausibility of the predictions made by AI models. Some approaches try to achieve explainability based on individual predictions on input data. Others, by contrast, try to limit the range of input pixels that impact the decisions of artificial intelligence.12

Medical devices and their manufacturers must comply with further regulatory requirements in addition to the Medical Device Regulation (MDR) and the In-vitro Diagnostics Regulation (IVDR). The EUs General Data Protection Regulation (GDPR), for instance, is of particular relevance for the explainability of artificial intelligence. It describes the rules that apply to the processing of personal data and is aimed at ensuring their protection. Article 110 of the Medical Device Regulation (MDR) explicitly requires measures to be taken to protect personal data, referencing the predecessor of the General Data Protection Regulation.

AI systems which influence decisions that might concern an individual person must comply with the requirements of Articles 13, 22 and 35 of the GDPR.

"Where personal data are collected, the controller shall provide ... the following information: ... the existence of automated decision-making and, at least in those cases, meaningful information about the logic involved."13

In simple terms, this means that patients who are affected by automated decision-making must be able to understand the decision and have the possibility of taking legal action against it. However, this is precisely the type of understanding that is not possible with black-box AI. Is a medical product implemented as black-box AI even eligible for certification as a medical device? The exact interpretation of the requirements specified in the General Data Protection Regulation is currently the subject of legal debate.14

The Medical Device Regulation places manufacturers under the obligation to ensure the safety of medical devices. Among other specifications, Annex I to the regulation includes requirements concerning the repeatability, reliability and performance of medical devices (both for stand-alone software and for software embedded in a medical device):

"Devices that incorporate electronic programmable systems, including software, shall be designed to ensure repeatability, reliability and performance in line with their intended use." (MDR Annex I, 17.1)15

Compliance with general safety and performance requirements can be demonstrated by utilizing harmonized standards. Adherence to a harmonized standard leads to the assumption of conformity, whereby the requirements of the regulation are deemed to be fulfilled. Manufacturers can thus validate artificial intelligence models in accordance with the ISO 13485:2016 standard, which, among other requirements, describes the processes for the validation of design and development in clause 7.3.7.

For machine learning, two independent sets of data must be considered. In the first step, one set of data is needed to train the AI model. Subsequently, another set of data is necessary to validate the model. Validation should use independent data, and can also be performed by cross-validation, in the sense of the combined use of both data sets. In any case, an AI model can only be validated using data that are independent of the training data. Now, which ratio is recommended for the two sets of data? This is not an easy question to answer without more detailed information about the characteristics of the AI model. The published literature (the state of the art) recommends a ratio of approximately 80% training data to 20% validation data. However, the ratio used depends on many factors and is not set in stone. The Notified Bodies will continue to monitor the state of the art in this area and, within the scope of conformity assessment, will also request the reasons underlying the ratio used.
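An 80/20 split of the kind described above can be sketched as follows. This is a plain-Python illustration; real projects typically use library utilities such as scikit-learn's `train_test_split`, and the chosen ratio must itself be justified for the model at hand:

```python
import random

def split_dataset(samples: list, train_ratio: float = 0.8, seed: int = 42):
    """Shuffle, then split into independent training and validation sets."""
    shuffled = samples[:]                  # work on a copy, not the caller's list
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

data = list(range(1000))                   # stand-in for 1,000 labeled records
train, val = split_dataset(data)
print(len(train), len(val))                # 800 200

# The two sets must be disjoint: never validate on data the model was trained on.
assert not set(train) & set(val)
```

Shuffling before the cut matters: if the records are ordered (by date, by site, by patient group), a naive head/tail split would give training and validation sets with different statistical distributions, the bias problem discussed below.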

Another important question concerns the number of data sets required. This is difficult to assess in general terms, as it depends on several factors.

Generally, the larger the amount of data, the more performant the model can be assumed to be. In their publication on natural language disambiguation, Banko and Brill from Microsoft state: "After throwing more than one billion words within context at the problem, any algorithm starts to perform incredibly well."16

At the other end of the scale, i.e., the minimum number of data sets required, computational learning theory offers approaches for estimating a lower threshold. However, general answers to this question are not yet known; these approaches are based on ideal assumptions and are valid only for simple algorithms.

Manufacturers need to look not only at the amount of data but also at the statistical distribution of both sets of data. To prevent bias, the data used for training and validation must represent the statistical distribution of the application environment; training with data that are not representative will result in bias. The U.S. healthcare system, for example, uses artificial-intelligence algorithms to identify and help patients with complex health needs. However, it soon became evident that, for patients with the same level of health risk, the model suggested Afro-American patients less often for enrollment in these special high-risk care management programs.17 Studies carried out by Obermeyer et al. showed the cause to be racial bias in the training data. Bias in training data not only involves ethical and moral aspects that manufacturers need to consider: it can also affect the safety and performance of a medical device. Bias in training data could, for example, result in certain indications going undetected on fair skin.
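A basic sanity check of the kind implied here is to compare the subgroup composition of the training set against that of the intended patient population. This is a toy sketch with hypothetical subgroup labels; real bias audits, such as the one by Obermeyer et al., go far beyond marginal frequencies:

```python
from collections import Counter

def subgroup_shares(labels: list[str]) -> dict[str, float]:
    """Fraction of samples belonging to each subgroup."""
    counts = Counter(labels)
    total = len(labels)
    return {group: n / total for group, n in counts.items()}

# Hypothetical subgroup labels: training data vs. the target population.
training = ["A"] * 90 + ["B"] * 10
population = ["A"] * 60 + ["B"] * 40

train_shares = subgroup_shares(training)
pop_shares = subgroup_shares(population)

for group in pop_shares:
    gap = abs(train_shares.get(group, 0.0) - pop_shares[group])
    flag = "  <-- misrepresented" if gap > 0.10 else ""
    print(f"{group}: train={train_shares.get(group, 0.0):.0%} "
          f"population={pop_shares[group]:.0%}{flag}")
```

Here subgroup B makes up 40% of the target population but only 10% of the training data, exactly the kind of distribution mismatch that produces the biased behavior described in the text.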

Many deep-learning models rely on a supervised learning approach, in which AI models are trained using labeled data. In such cases, the bottleneck is not the amount of data but the rate and accuracy at which data are labeled. This makes labeling a critical process in model development. At the same time, data labeling is error-prone and frequently subjective, as it is mostly done by humans, and humans tend to make mistakes in repetitive tasks (such as labeling thousands of images).

Labeling large data volumes and selecting suitable identifiers is a time- and cost-intensive process. In many cases, only a small fraction of the data is processed manually. These data are used to train an AI system, which is then instructed to label the remaining data itself, a process that is not always error-free, meaning that errors will be reproduced.7 Nevertheless, the performance of artificial intelligence combined with machine learning depends very much on data quality. This is where the accepted principle of "garbage in, garbage out" becomes evident: if a model is trained using data of inferior quality, the developer will obtain a model of correspondingly inferior quality.

Other properties of artificial intelligence that manufacturers need to take into account are adversarial-learning problems and instabilities of deep-learning algorithms. Generally, most machine-learning algorithms assume that training and test data are governed by identical distributions. However, this statistical assumption can be exploited by an adversary, i.e., an attacker who attempts to fool the model by providing deceptive input. Such attackers aim to destabilize the model and cause the AI to make false predictions. Introducing certain adversarial patterns, invisible to the human eye, into the input data causes the AI system to make major detection errors. In 2020, for example, the security company McAfee demonstrated its ability to trick Tesla's Mobileye EyeQ3 AI system into driving 80 km/h over the speed limit, simply by adding a 5-cm strip of black tape to a speed-limit sign.24
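The classic fast gradient sign method (FGSM) illustrates the mechanics of such an attack: the input is nudged by a tiny, bounded step in the direction that most changes the model's output. A toy version against a linear classifier (illustrative only; real attacks target deep networks and image inputs):

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])       # weights of a toy linear classifier
x = np.array([0.3, 0.1, 0.9])        # a benign input; score > 0 means class "positive"

def score(x):
    return float(w @ x)

# FGSM-style step: perturb x by epsilon * sign(gradient) to push the score down.
# For a linear score, the gradient with respect to x is simply w.
epsilon = 0.4
x_adv = x - epsilon * np.sign(w)

print(score(x))                      # 0.55  -> classified positive
print(score(x_adv))                  # -0.85 -> decision flipped to negative
print(np.abs(x_adv - x).max())       # 0.4   -> every change bounded by epsilon
```

The point of the epsilon bound is that each input feature changes only slightly, the numeric analogue of a perturbation "invisible to the human eye", yet the classification flips.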

AI methods used in the reconstruction of MRT and CT images have also repeatedly proved unstable in practice. A study investigating six of the most common AI methods used in the reconstruction of MRT and CT images found them to be highly unstable: even minor changes in the input images, invisible to the human eye, resulted in completely distorted reconstructed images.18 The distorted images included artifacts such as the removal of tissue structures, which might result in misdiagnosis. Such an attack may cause artificial intelligence to reconstruct a tumor at a location where there is none in reality, or even to remove cancerous tissue from the real image. These artifacts are not present when manipulated images are reconstructed using conventional algorithms.18

Another vulnerability of artificial intelligence concerns image-scaling attacks, which have been known since as long ago as 2019.19 Image-scaling attacks enable an attacker to manipulate input data in such a way that artificial-intelligence models combining machine learning with image scaling can be brought under the attacker's control. Xiao et al., for example, succeeded in manipulating the scaling routines of the well-known machine-learning library TensorFlow in such a manner that attackers could even replace complete images.19 An example of such an image-scaling attack is shown in Figure 3: in the scaling operation, the image of a cat is replaced by an image of a dog. Image-scaling attacks are particularly critical, as they can both distort the training of artificial intelligence and influence the decisions of artificial intelligence trained on manipulated images.
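Image-scaling attacks exploit the fact that common downscaling routines compute each output pixel from only a few source pixels. A minimal illustration of that property using nearest-neighbor scaling (pure Python on a tiny integer "image"; real attacks such as Xiao et al.'s target the interpolation kernels of actual libraries):

```python
def nearest_downscale(img: list[list[int]], factor: int) -> list[list[int]]:
    """Nearest-neighbor downscaling: keeps only every `factor`-th pixel."""
    return [row[::factor] for row in img[::factor]]

# An 8x8 "benign" image of zeros.
img = [[0] * 8 for _ in range(8)]

# The attacker edits ONLY the pixels the scaler will sample (every 4th one).
for r in range(0, 8, 4):
    for c in range(0, 8, 4):
        img[r][c] = 255

tampered_fraction = sum(v == 255 for row in img for v in row) / 64
print(tampered_fraction)          # 0.0625 -> barely noticeable at full size

print(nearest_downscale(img, 4))  # [[255, 255], [255, 255]] -> scaled image fully controlled
```

Only about 6% of the full-size pixels were touched, yet the downscaled image, the one the model actually sees, is entirely attacker-chosen. This is the mechanism that lets a full-size "cat" become a scaled-down "dog".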

Adversarial attacks and stability issues pose significant threats to the safety and performance of medical devices incorporating artificial intelligence. Especially concerning is the fact that when and where such attacks could occur is difficult to predict, and the response of the AI to them is difficult to specify. If, for instance, a conventional surgical robot is attacked, it can still rely on other sensors; changing the policy of the AI in a surgical robot, however, might lead to unpredictable behavior and thereby to responses of the system that are catastrophic from a human perspective. Methods to address these vulnerabilities and reduce susceptibility to errors do exist. Defense techniques such as adversarial training and defensive distillation, which make models more resilient, have already been applied successfully to image reconstruction algorithms.21 A further method is the human-in-the-loop approach, as human perception is highly robust against adversarial attacks targeting AI systems; however, it is limited to instances where humans can be directly involved.25
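Adversarial training, one of the defenses mentioned above, folds the attack into the training loop: each batch is first perturbed in the loss-increasing direction, and the model is then updated on the perturbed batch. Below is a minimal sketch using a logistic-regression stand-in for a real model and an FGSM inner step; all sizes, seeds, and constants are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
d, n, eps, lr = 20, 500, 0.1, 0.5

# Synthetic, linearly separable data standing in for a real training set.
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)

w = np.zeros(d)  # logistic-regression weights
for _ in range(200):
    # Inner step: FGSM perturbation that increases each example's loss
    # (for logistic loss, the input gradient is (p - y) * w).
    grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)
    # Outer step: ordinary gradient descent, but on the perturbed batch.
    grad_w = X_adv.T @ (sigmoid(X_adv @ w) - y) / n
    w -= lr * grad_w

acc = float(((X @ w > 0) == (y > 0.5)).mean())  # accuracy on clean inputs
```

The model is thus trained against the worst-case inputs it will face, trading a little clean accuracy for robustness inside the eps-ball of allowed perturbations.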

Although many medical devices using artificial intelligence have already been approved, the regulatory pathways in the medtech sector are still open. At present, no laws, common specifications or harmonized standards exist to regulate the application of AI in medical devices. In contrast to the EU authorities, the FDA published a discussion paper on a proposed regulatory framework for artificial intelligence in medical devices in 2019. The document is based on the principles of risk management, software-change management, guidance on the clinical evaluation of software and a best-practice approach to the software lifecycle.20 In 2021, the FDA published its action plan on furthering AI in medical devices. The action plan consists of five next steps, the foremost being to develop a regulatory framework explicitly for change control of AI, good machine learning practice, and new methods to evaluate algorithm bias and robustness.26

In 2020 the European Union also published a position paper on the regulation of artificial intelligence and medical devices. The EU is currently working on future regulation, with a first draft expected in 2021.

China's National Medical Products Administration (NMPA) published the guidance document "Technical Guiding Principles of Real-World Data for Clinical Evaluation of Medical Devices." It specifies obligations concerning requirements analysis, data collection and processing, model definition, verification and validation, as well as post-market surveillance.

Japan's Ministry of Health, Labour and Welfare is working on a regional standard for artificial intelligence in medical devices; to date, however, this standard is available in Japanese only. Key points of assessment are plasticity, the predictability of models, the quality of data and the degree of autonomy.27

In Germany, the Notified Bodies have developed their own guidance for artificial intelligence. The guidance document was prepared by the Interest Group of the Notified Bodies for Medical Devices in Germany (IG-NB) and is aimed at providing guidance to Notified Bodies, manufacturers and interested third parties. The guidance follows the principle that the safety of AI-based medical devices can only be achieved by means of a process-focused approach that covers all relevant processes throughout the whole life cycle of a medical device. Consequently, the guidance does not define specific requirements for products, but for processes.

The World Health Organization, too, is currently working on a guideline addressing artificial intelligence in health care.

Artificial intelligence is already used in the medtech sector, albeit currently somewhat sporadically. At the same time, the number of AI algorithms certified as medical devices has increased significantly in recent years.28 Artificial intelligence is expected to play a significant role in all stages of patient care. According to the requirements defined in the Medical Device Regulation, any medical device, including those incorporating AI, must be designed in such a way as to ensure repeatability, reliability and performance according to its intended use. In the event of a fault condition (single fault condition), the manufacturer must implement measures to minimize unacceptable risks and any reduction in the performance of the medical device (MDR Annex I, 17.1). This, however, requires validation and verification of the AI model.

Many of the AI models used are black-box models. In other words, there is no transparency in how these models arrive at their decisions. This poses a problem where interpretability and trustworthiness of the systems are concerned. Without transparent and explainable AI predictions, the medical validity of a decision might be doubted. Some current errors of AI in pre-clinical applications might fuel doubts further. Explainable and approvable AI decisions are a prerequisite for the safe use of AI on actual patients. This is the only way to inspire trust and maintain it in the long term.

The General Data Protection Regulation demands a high level of protection of personal data. Its strict legal requirements also apply to processing of sensitive health data in the development or verification of artificial intelligence.

Adversarial attacks aim at influencing artificial intelligence, both during the training of the model and in the classification decision. These risks must be kept under control by taking suitable measures.

Impartiality and fairness are important, safety-relevant, moral and ethical aspects of artificial intelligence. To safeguard these aspects, experts must take steps to prevent bias when training the system.

Another important question concerns the responsibility and accountability of artificial intelligence. Medical errors made by human doctors can generally be traced back to the individuals, who can be held accountable if necessary. However, if artificial intelligence makes a mistake, the lines of responsibility become blurred. For medical devices, on the other hand, the question is straightforward: the legal manufacturer of the medical device incorporating artificial intelligence must ensure the safety and security of the medical device and assume liability for possible damage.

Regulation of artificial intelligence is likewise still at an early stage of development, with various approaches being pursued. All major regulators around the globe have defined, or are starting to define, requirements for artificial intelligence in medical devices. A high level of safety in medical devices will only be possible with suitable measures in place to regulate and control artificial intelligence, but this must not impair the development of technical innovation.



Go here to see the original:
AI in MedTech: Risks and Opportunities of Innovative Technologies in Medical Applications - MedTech Intelligence


Three individuals with Stanford affiliations named 2021 Knight-Hennessy Scholars – Stanford Today – Stanford University News

By Kathleen J. Sullivan

The Knight-Hennessy Scholars program, which funds graduate study at Stanford, last week announced its 2021 cohort, which includes three individuals with Stanford affiliations.

The scholars, who make up the fourth Knight-Hennessy Scholars cohort, will also participate in the King Global Leadership Program, which strives to develop inspiring, visionary leaders who have strong cross-cultural perspectives and are committed to the greater good.

The incoming cohort of 76 scholars from around the world will join graduate programs during the 2021-22 academic year in every Stanford School: Business, Earth, Education, Engineering, Humanities and Sciences, Law and Medicine.

The three incoming scholars with Stanford affiliations are: Joy Hsu, '20, who is pursuing a master's degree in computer science; Olivia Martin, '19; and Nancy Xu, '19.

Joy Hsu

Joy Hsu, who is from Hualien, Taiwan, will pursue a PhD in computer science, with a focus on artificial intelligence and computer vision, in the School of Engineering.

Joy Hsu (Image credit: Courtesy Knight-Hennessy Scholars)

She aspires to one day become a computer science professor, as well as an advisor to local and national governments on policies regarding artificial intelligence.

"I was stunned and incredibly grateful to be named a Knight-Hennessy Scholar," Hsu said. "I'm excited to join and learn from such a diverse cohort."

Hsu, who earned a bachelor's degree with honors in computer science in 2020, is currently pursuing a master's degree with distinction in research in computer science, with concentrations in artificial intelligence and biocomputing.

She is a researcher at the Medical AI and Computer Vision Lab and the SLAC National Accelerator Lab, where she creates machine learning algorithms for unsupervised structure discovery in electron tomography.

At Stanford, Hsu helped organize TreeHacks, which invites college students around the world to turn crazy ideas into real projects. She also volunteered at Girls Teaching Girls to Code, a program designed to inspire high school students to pursue careers in computer science. In addition, she served in the student mental health and wellness cabinet in the Associated Students of Stanford University.

In 2021, Hsu was awarded a National Science Foundation Graduate Research Fellowship and a National Defense Science and Engineering Graduate Fellowship for her proposal on unsupervised learning in the computer vision domain.

Currently, Hsu is a technology policy associate in the Mayor's Office of Technology and Innovation in San Jose, California, which is leveraging technology to address the most pressing issues facing the city.

Olivia Martin

Olivia Martin, who is from San Diego, California, will pursue a JD at Stanford Law School and a PhD in economics, with a focus on public economics and administrative law, at the School of Humanities and Sciences.

Olivia Martin (Image credit: Courtesy Knight-Hennessy Scholars)

She aspires to help governments better collect and use data to design and implement sustainable, equitable and evidence-based policy.

Martin said she was "shocked and thrilled" to be named a Knight-Hennessy Scholar.

"I feel extremely honored to have this opportunity," she said.

Martin, who earned a bachelor's degree in economics at Stanford in 2019, received an Anna Laura Myers Prize for Outstanding Honors Thesis in Economics for her honors thesis, titled "Understanding the Geography of Housing Instability: Eviction and Affordable Housing Development."

During her senior year at Stanford, Martin served as the chair of Stanford in Government, a non-partisan, student-led affiliate of the Haas Center for Public Service.

For two years, Martin tutored two middle school students through the East Palo Alto Tennis & Tutoring program, whose mission is to change the life trajectory of local youth and their families through academic support, parent empowerment and tennis lessons.

In the spring of 2019, the Brown University Journal of Philosophy, Policy, and Economics, a peer-reviewed academic journal for undergraduate and graduate students, published Martin's paper, "A Fair Free Lunch? Reconciling Freedom and Reciprocity in the Context of Universal Basic Income."

Currently, Martin is a research manager at USAFacts, a not-for-profit, nonpartisan, civic initiative dedicated to increasing the availability of government data to drive fact-based discussion. Since graduating from Stanford, she also helped develop a new portfolio of talent-related investments for Ballmer Group, which supports efforts to improve economic mobility for children and families in the United States.

Nancy Xu

Nancy Xu, who is from Fremont, California, will pursue a master's degree in business administration at the Graduate School of Business and a PhD in computer science at the School of Engineering.

Nancy Xu (Image credit: Courtesy Knight-Hennessy Scholars)

She aspires to develop scalable artificial intelligence systems that will benefit society-at-large, particularly by learning and automating complex tasks and processes.

Xu said she was "beyond grateful" to family and friends for their support over the years.

"I look forward to helping create the transformational impact that artificial intelligence will have on our society in the next few years," she said.

At Stanford, Xu earned bachelor's degrees in mathematics and computer science with honors in 2019.

Xu is co-author of "The Kipoi repository accelerates community exchange and reuse of predictive models for genomics," which was published in May 2019 in Nature Biotechnology.

She served as president of Stanford Women in Computer Science, a student organization that supports and promotes the growing number of women in computer science and technology, and as president of the Stanford Association for Computing Machinery.

Xu is a founder and former editor of The Gradient, a digital magazine focused on the latest research and developments in artificial intelligence and machine learning founded in 2017 by students and researchers at the Stanford Artificial Intelligence Laboratory.

Since graduating from Stanford, Xu has worked for several companies, including Alpha Health (now known as AKASA), where she created new products to help streamline healthcare processes such as scanning patient information from insurance cards, and at Illumina Inc., where she helped create a consortium of hospitals, research institutions and clinics dedicated to gaining a deeper understanding of the human genome.

More:

Three individuals with Stanford affiliations named 2021 Knight-Hennessy Scholars - Stanford Today - Stanford University News


Governor Murphy Announces New Jersey Department of Education Grants to Create Computer Science Learning Hubs Throughout State – InsiderNJ

TRENTON – Building on his Computer Science for All initiative, Governor Phil Murphy today announced that three universities will receive grants from the New Jersey Department of Education (DOE) to create computer science learning hubs throughout the state.

The Expanding Access to Computer Science: Professional Learning Grants will help Fairleigh Dickinson University, Kean University, and Rutgers University in New Brunswick create hubs that will provide high-quality professional learning for educators and resources for school districts to increase computer science opportunities for students. The grants, which are funded through the Fiscal Year 2021 Appropriations Act, will also help the three universities build partnerships with stakeholders to promote the growth of computer science education.

"New Jersey is committed to ensuring our students have access to a high-quality education in computer science that will open up doors for them in the future," said Governor Phil Murphy. "The learning hubs will provide opportunities for educators to be on the forefront of computer science education, and to share that knowledge with students in the classroom. These efforts will contribute to the academic growth of our students and the economic growth of our State."

"It's our vision that New Jersey schools will help prepare students for success in this knowledge-based economy," said Dr. Angelica Allen-McMillan, Acting Commissioner of Education. "This initiative will help toward the goal of providing equitable access to high-quality computer science education."

The DOE estimates that the learning hubs will lead to approximately 3,000 students receiving equitable high-quality computer science education during the grant period, which runs until August 31, 2022.

The grants support the vision in Governor Murphy's Computer Science State Plan, which details the State's approach to supporting and expanding equitable access to high-quality computer science education for all K-12 students.

"We applaud Governor Murphy for his commitment to providing a larger and more diverse set of students access to computer science courses, which are fundamental for 21st century careers," said Trevor Packer, head of the AP Program. "New Jersey's new teacher training hubs will help make such courses available in every New Jersey school."

"Preparing and supporting teachers is essential to expanding access to computer science education," said Hadi Partovi, Founder and CEO of Code.org. "Congratulations to New Jersey for taking this important step to provide more students in the state the opportunity to learn and explore computer science."

"In order to effectively implement Governor Murphy's Computer Science Action Plan, we need to make sure that we provide support and professional development for thousands of K-12 teachers throughout New Jersey," said Daryl Detrick, co-Director of the CS4NJ Coalition. "The development of the Computer Science Teaching Hubs is a huge step in the right direction. The CS4NJ Coalition looks forward to continuing to work with the Governor's office and DOE to ensure all students in NJ have access to high-quality and equitable computer science education that opens doors of opportunity."

The grant awards include:

See the original post here:

Governor Murphy Announces New Jersey Department of Education Grants to Create Computer Science Learning Hubs Throughout State - InsiderNJ


What does it take to become an astronaut? – Livescience.com

It's the dream of so many children to become an astronaut: to break free of gravity, float above the Earth and travel the cosmos. For many, this dream fades by adulthood. But for some, this elusive career will always be a goal.

So, what does it take to become an astronaut?

First, to be a candidate, you usually must be a citizen of a country that's a member of a space agency. To sign up with NASA, for example, you must be a U.S. citizen. However, some private space companies may recruit astronauts without regard to their citizenship.

Related: Why is space a vacuum?

Many qualifications, such as education, are similar across space agencies. To apply to be an astronaut with the European Space Agency (ESA), for example, you need a master's degree or higher in the natural sciences, medicine, engineering, mathematics or computer science, or you need an experimental test pilot degree, which teaches graduates how to pilot aircraft that are being tested and how to manage research programs. NASA has the same requirements but also allows two years toward a doctorate in these subjects.

A degree isn't enough, though. To meet candidate requirements, applicants also need real-world experience: at least two years of relevant post-graduate experience in their field of study for NASA, or three years for the ESA. NASA's requirement can also be met with 1,000 pilot-in-command hours aboard a jet. Because English is the language used on the International Space Station, you must be fluent. (Fluency in other languages, such as Russian, is an asset but not a requirement, according to the ESA.)

Astronauts must also have a passing health record. For example, ESA requires medical certification for a Private Pilot License or higher with the initial application, although you do not need to hold the license itself. NASA candidates must be able to pass a long-duration flight astronaut physical. "Typically, as we near the end of the selection process, we put them through the same evaluation process that we would use for assigning a current astronaut to a mission, just to make sure that they would be eligible for a spaceflight assignment," said Anne Roemer, astronaut selection manager at NASA.

In the past, most physical disabilities would have disqualified a person from being an astronaut. But ESA has launched the Parastronaut Feasibility Project to recruit at least one astronaut with short stature, or under 4 feet, 3 inches (130 centimeters); a pronounced leg length difference; or lower limb deficiency, such as amputation at the knee. The agency will work with this astronaut to determine what alterations the space agency needs to make to existing protocols to send this person to space.

Mental health is just as important as physical health. Astronauts work long hours in high-stress situations. They are away from their friends and family for months at a time, and communication with those on Earth can be challenging. For instance, on the International Space Station, email is available and astronauts can make video calls, but they can only receive audio on their end and calls have a few seconds of lag. For missions to Mars, communicating with family back home would likely be more difficult. Moreover, astronauts are stuck in small, enclosed areas with no real way to get alone time.

Related: Where is the center of the universe?

"During the selection process, we will test, through psychometric testing and other tools, the mental stability of the person, particularly with respect to if there are any red flags that go up," such as psychiatric disorders, said Dagmar Boos, head of ESA's Competence and Policy Centre. This mental stability is important for both the individual astronauts and the safety of the team as a whole, Boos said.

Those are the minimum requirements, but it takes much more to be selected as an astronaut. More than 18,000 people applied to NASA's astronaut class of 2017, but only 12 were chosen. Candidates must be truly impressive to stand out from the crowd.

One quality that the selection team looks for is the ability to be both a leader and a follower. Experience working in extreme environments, like the North Pole or the desert, can further woo the judges, Boos said. She also looks for people who have had responsibility over the lives of others, such as by being part of a rescue team.

In addition to flying in space, astronauts have technical roles on Earth and are the faces of the spaceflight program, so they have to be able to work in a range of contexts. "We're looking for well-rounded people across the board," Roemer said. "That can include career accomplishments, hobbies and interests."

Finally, astronauts must be easy to work with. "The goal is eventually to go to Mars, which is a fairly long mission," Roemer said. "They're trying to assess, could I be locked in a tin can with this person and ensure that we have a successful mission?"

Originally published on Live Science.

Here is the original post:

What does it take to become an astronaut? - Livescience.com


2021 Milestone Years of Service Recognition – Syracuse University News

The following members of the Syracuse University community are recognized for achieving Years of Service milestones in 2020:

Jeurje Alamir, Facilities Services; Kathryn Allen, School of Information Studies; Suzanne Baldwin, Department of Earth and Environmental Sciences, College of Arts and Sciences; Bruce Baehr, Facilities Services; Theresa Bathen, Athletics; Teresa Battisto, Falk College of Sport and Human Dynamics; Karen Baum, Newhouse School of Public Communications; Cathy Bottari, Office of Human Resources; James Byrne, Department of Public Health, Student Services, Falk College of Sport and Human Dynamics; Denneva Calkins, Department of Transmedia, College of Visual and Performing Arts; Linda Carty, Department of African American Studies, College of Arts and Sciences; Kelley Champa, Chancellor's House; Marcelina Chavez, Facilities Services; Biao Chen, Department of Electrical Engineering and Computer Science, College of Engineering and Computer Science; Jonathan Cheney, College of Arts and Sciences; Jill Clarke, Facilities Services; Susan Clayton, Whitman School of Management; Dan Coman, Department of Mathematics, College of Arts and Sciences; Natasha Cooper, Syracuse University Libraries; Joanne Craner, Newhouse School of Public Communications; Janice Darmody, Facilities Services; Randy Dearborn, Office of Admissions; James Devereaux, Fire and Life Safety Services; William DiCosimo, School of Music, College of Visual and Performing Arts; Deborah Dohne, School of Art, College of Visual and Performing Arts; Kathryn Everly, Department of Languages, Literatures and Linguistics, College of Arts and Sciences; Ellen Fallon, Writing Program, College of Arts and Sciences; Tracy Feocco, Newhouse School of Public Communications; Priyantha Fernando, Institute for Veterans and Military Families; Robert Finnegan, Facilities Services; Paul Fitzgerald, Department of Earth and Environmental Sciences, College of Arts and Sciences; Michael Frasciello, University College; Rose Frenza, Office of the Registrar; Michael Fudge Jr., School of Information Studies; Peter Giovinazzo, Enterprise Application Systems; Holly Greenberg, School of Art, College of Visual and Performing Arts; Marjorie Greeson, Treasurer's Office; Mary Pat Grzymala, Facilities Services; Nancy Hard, Syracuse Abroad; Denise Heckman, School of Design, College of Visual and Performing Arts; William Hicks Jr., Athletics; Monica Hobika, Office of Human Resources; Janet Hyde, Office of Student Living; Linda Ivany, Department of Earth and Environmental Sciences, College of Arts and Sciences; Kelly Jarvi, Department of Mathematics, College of Arts and Sciences; Tazim Kassam, Department of Religion, College of Arts and Sciences; Joseph Kehn, Materials Distribution; Tina Kelly, Cash Operations; Kathleen Kenny, College of Arts and Sciences; Maureen O'Connor Kicak, School of Information Studies; Mary Kiernan, Department of Food Studies, Falk College of Sport and Human Dynamics; Holly Kingdeski, Financial Aid Services; Bruce Kingma, School of Information Studies; Judith Kopp, Center for Disability Resources; Linda Koser, Department of Public Safety; Maureen Kozlowski, Office of Student Living; Amy Kwasigroch, Syracuse University Libraries; Eunkyu Lee, Whitman School of Management; Phyllis Liszewski, Advancement and External Affairs; Wendy Lockwood, Office of Human Resources; Suzanne Loguidice, Treasurer's Office; Phillip Lynd, Facilities Services; Paula Maxwell, School of Education; Jennie McLaughlin, University College; Mark Meyer, Facilities Services; Mark Monette, Facilities Services; Karen Nadolski, Marketing and Communications; Shannon Nanda, Whitman School of Management; Kelly Needham, Newhouse School of Public Communications; Leonese Nelson, College Preparation Programs; Lisa Nicholas, Tepper in NYC Program; Marilyn Niland, Health Services; Jae Oh, Department of Electrical Engineering and Computer Science, College of Engineering and Computer Science; John Olson, Syracuse University Libraries; Hana Palmer, Facilities Services; Joseph Pellegrino, Department of Communication Sciences and Disorders, College of Arts and Sciences; James Perkins, Facilities Services; Thomas Perreault, Department of Geography, Maxwell School of Citizenship and Public Affairs; Kelly Pettingill, Falk College of Sport and Human Dynamics; Bradley Pike, Athletics; David Popp, Department of Public Administration and International Affairs, Maxwell School of Citizenship and Public Affairs; Jennifer Pulver, School of Information Studies; Brien Ryder, Facilities Services; Josephine Scanlon, College of Law; Jeanne Schmidt, Department of Teaching and Leadership, School of Education; Erin Levy Schaal, Facilities Services; Wendy Spadafora, Office of Admissions; James Spoelstra, College of Engineering and Computer Science; Christopher Stewart, Campus Safety and Emergency Services; Melinda Stoffel, Department of Public Health, Falk College of Sport and Human Dynamics; Tina Thompson, Bursar's Office; Deborah Toole, Department of Geography, Maxwell School of Citizenship and Public Affairs; David Travers, Facilities Services; Melissa Tucci, Telecommunications; Mary VanSkiver, Facilities Services; Maureen Verone, Hendricks Chapel; Padmal Vitharana, Whitman School of Management; Eric Wagner, Residential Safety, Department of Public Safety; Kevin Wall, Office of the Registrar; Cheryl Walsh, Syracuse University Day Care Center; James Warne, Administrative Computing; Geoffrey Wemple, Facilities Services; Theodore Woodruff, Facilities Services

Darle Balfoort, Syracuse University Libraries; Charles Brown Jr., Department of Physics, College of Arts and Sciences; Yvonne Buchanan, School of Art, College of Visual and Performing Arts; Karen Buffum, Food Services; Ronald Bunal, Networking; Donald Buschmann, Syracuse Stage; Anthony Carbone, Syracuse University Libraries; Donald Carr, School of Design, College of Visual and Performing Arts; Kelley Coleman, Campbell Institute, Maxwell School of Citizenship and Public Affairs; Todd Conover, Department of Design, College of Visual and Performing Arts; Janet Coria, Department of Sociology, Maxwell School of Citizenship and Public Affairs; Luvenia Cowart, Department of Public Health, Falk College of Sport and Human Dynamics; Jay H. Cox, Marketing and Communications; Julia Czerniak, School of Architecture; Lisa Dolak, Office of the Board of Trustees and College of Law; David Driesen, College of Law; Bradley Ethington, School of Music, College of Visual and Performing Arts; Angela Flanagan, School of Education; Thomas Foody, Facilities Services; Catherine Gerard, Midcareer and Executive Education, Maxwell School of Citizenship and Public Affairs; Robert Gerbin, Marketing and Communications; Erika Haber, Department of Languages, Literatures and Linguistics, College of Arts and Sciences; Cathleen Hayduke, Sponsored Accounting; Ellen Hobbs, Graduate School; Mary Lee Hodgens, Coalition of Museums and Art Centers; Juanita Horan, Moynihan Institute, Maxwell School of Citizenship and Public Affairs; Jamie Jackson, Parking and Transit Services; Mary Kendrat, Campus Planning, Design and Construction; Dennis Kinsey, Department of Public Relations, Newhouse School of Public Communications; Elzbieta Kosatka, Facilities Services; Heather Labuz, Budget and Planning; Roger Lavin, Enterprise Application Systems; Steven Lux, Midcareer and Executive Education, Maxwell School of Citizenship and Public Affairs; Stephen Masiclat, Newhouse School of Public Communications; Linda Mathis, Newhouse School of Public Communications; Cheri McEntee, Information Technology Services; Alan Middleton, College of Arts and Sciences; Anne Mosher, Department of Geography, Maxwell School of Citizenship and Public Affairs; Melissa Perry, Catering Services; Sanford Peterson, Syracuse University Libraries; Cristina Regan-Swift, Advancement and External Affairs; Thomas Roux, Syracuse University Libraries; Jeffrey Rubin, School of Information Studies; Francisco Sanin, School of Architecture; Roy Simmons III, Athletics; Tomasz Skwarnicki, Department of Physics, College of Arts and Sciences; Adam Smith, Facilities Services; Carrie Smith, School of Social Work, Falk College of Sport and Human Dynamics; Evan Smith, Department of Television, Radio and Film, Newhouse School of Public Communications; Annette Statum, Facilities Services; Shirley Trendowski, Food Services; Ann Marie Trinca, Financial Aid Services; Michael Veley, Department of Sport Management, Falk College of Sport and Human Dynamics; Ping Zhang, School of Information Studies

Mary Anagnost, Advancement and External Affairs; Jamie Base, Facilities Services; Chris Bolt, WAER; Jacqueline Borowve, Department of Religion, College of Arts and Sciences; Paul Browning, Facilities Services; Gail Bulman, Department of Languages, Literatures and Linguistics, College of Arts and Sciences; Jeffrey Carnes, Department of Languages, Literatures and Linguistics, College of Arts and Sciences; Kimberly Charima, Disbursements Processing; Barry Davidson, Department of Mechanical and Aerospace Engineering, College of Engineering and Computer Science; Marion Dorfer, School of Design, College of Visual and Performing Arts; Andrew Keith Doss, Office of Veteran and Military Affairs; Gerald Edmonds, Office of the Associate Provost for Academic Programs; Jay Evans, Syracuse University Campus Store; Maryann Evans, Facilities Services; Susan Cornelius Edson, Athletics; Christina Feikes, Writing Program, College of Arts and Sciences; Katherine Fiedler, Falk College of Sport and Human Dynamics; Maureen Fitzsimmons, Writing Program, College of Arts and Sciences; Susan Fredericks, Midcareer and Executive Education, Maxwell School of Citizenship and Public Affairs; Margaret Frey, Parking and Transit Services; Ronda Garlow, Midcareer and Executive Education, Maxwell School of Citizenship and Public Affairs; Deborah Golia, Falk College; Kristina Greene, Advancement and External Affairs; Patricia Hennigan, Financial Aid Services; Julianne Hughes, Stadium Operations Management; Nancy Italiano, Dining Services; Elizabeth Jeffrey, Office of Technology Transfer; Betty Johnson-Adair, Syracuse University Libraries; Darlene Kennedy, Enterprise Application Systems; Colleen Kepler, Department of Languages, Literatures and Linguistics, College of Arts and Sciences; Laura Lape, College of Law; Elisabeth Lasch-Quinn, Department of History, Maxwell School of Citizenship and Public Affairs; Steven Leonard, Information Security; Eleanor Maine, Department of Biology, College of Arts and Sciences; Robin Paul Malloy, College of Law; Deb Monahan, School of Social Work, Falk College of Sport and Human Dynamics; Michelle Mondo, Department of Teaching and Leadership, School of Education; Nicole Morrissette-Ugoji, Food Services; Avani Patankar, Food Services; Eric Patten, Enterprise Application Systems; Rebecca Ponza, Environmental Health and Safety Services; William Poole, Facilities Services; Beth Prieve, Department of Communication Sciences and Disorders, College of Arts and Sciences; James Ponzi, Food Services; Krystal Porter, Mail Services; Lawrence Roux, Enterprise Application Systems; Scott Samson, Department of Earth and Environmental Sciences, College of Arts and Sciences; Pauline Saraceni, Alumni Engagement; Michael Scheftic, Core Infrastructure Services; Mark Svereika, Syracuse University Libraries; Stephanie Surlock, Treasurer's Office; Robert Thompson, Department of Television, Radio and Film, Newhouse School of Public Communications; Murali Venkatesh, School of Information Studies; Andrew Vogel, Department of Mathematics, College of Arts and Sciences; Garrett Wheeler-Diaz, Syracuse Stage; Theresa Whitlock, Food Services; Scott Wright, Facilities Services; Gale Youmell, Syracuse University Campus Store

Edward Bogucz, Department of Mechanical and Aerospace Engineering, College of Engineering and Computer Science; Ernest Colbourn Jr., Mail Services; Sean Corcoran, Department of Public Safety; Bridget Crary, School of Information Studies; Duane Davis, Mail Services; Joseph Downing, School of Music, College of Visual and Performing Arts; Lisa Farnsworth, Department of Philosophy, College of Arts and Sciences; Linda Stone Fish, Department of Marriage and Family Therapy, Falk College of Sport and Human Dynamics; James Fiumara, Facilities Services; John Fiumara, Facilities Services; Laura Gaul, Facilities Services; Gerald Greenberg, College of Arts and Sciences; Lisa Hairston, Food Services; Carol Hamilton, Syracuse University Libraries; Sheryl Hedrick, Food Services; Can Isik, College of Engineering and Computer Science; Michael Keenan, Food Services; Jay Lee, Department of Electrical Engineering and Computer Science, College of Engineering and Computer Science; John Mangicaro, Learning Environments and Media Production; Anastasia Marziale, General Accounting; Jeffrey Mertell, Department of Public Safety; John Miller, Facilities Services; Diane Oad, Enterprise Application Systems; Kara Patten, Academic Applications and Service Centers; Nancy Pelligrini, Food Services; Michael Petroff, Mail Services; Alice Pfeiffer, Syracuse University Press; Susan Potter, Food Services; David Regin, Dining Services; Joseph Roth, Facilities Services; John Sardino, Department of Public Safety; Kenneth Schoening, Budget and Planning; Sari Signorelli, Project Advance; Sofia Amparo Silva, Lubin House Admissions; Deborah Skeele, Campus Facilities Administration and Services; Patricia Sobotka, Office of the Board of Trustees; Angel Stevener, Facilities Services; Dona Hayes Storm, Department of Broadcast and Digital Journalism, Newhouse School of Public Communications; Robert Thompson III, Facilities Services; Peter Vinette, Networking; David West, Office of Admissions

Steven Adydan, Facilities Services; Carmine Avella, Mail Services; Shobha Bhatia, Department of Civil and Environmental Engineering, College of Engineering and Computer Science; Stephen Brandt, Food Services; John Desko, Athletics; Vanessa Dismuke, Syracuse University Libraries; William Dossert, College of Engineering and Computer Science; David Fowler, Facilities Services; Michael Harrison, Food Services; Jeffrey Hoone, Coalition of Museums and Arts Centers; Gwenn Judge, Budget and Planning; Susan Long, Transactional Records Access Clearinghouse, Newhouse School of Public Communications; Allen Myers, Food Services; Robert Ogletree, Facilities Services; S.P. Raj, Whitman School of Management; William Rizzo, Facilities Services; Anthony Ross, Facilities Services; James Ryan, Office of Admissions; Stephen Sartori, Marketing and Communications; Kim Sauer, Purchasing; Michelle Scheider, Food Services; Mark Tewksbury, Food Services; Maureen Thompson, Department of Public Health, Falk College; Adam Wright, Food Services

ML DeFuria, Graduate School; Margie Hughto, School of Art, College of Visual and Performing Arts; Gary Kelder, College of Law; Barbara Opar, Syracuse University Libraries; Melanie Stopyra, Marketing and Communications; Anne Walter, Syracuse University Day Care Center

Thomas Fondy, Department of Biology, College of Arts and Sciences; W. Henry Lambright, Department of Public Administration and International Affairs, Maxwell School of Citizenship and Public Affairs

Dorothy "Dottie" Russell, Schine Dining

Go here to see the original:

2021 Milestone Years of Service Recognition - Syracuse University News


East Stroudsburg University honors 119 employees with ceremony – Pocono Record

Special to the Pocono Record

East Stroudsburg University honored 119 employees for their service and dedication during the annual Employee Recognition Ceremony on May 3. This year's ceremony recognized individuals who have supported the missions and goals of ESU from 10 to 40 years, and achieved their milestones in either 2020 or 2021. ESU did not hold an Employee Recognition Ceremony in 2020 because of the COVID-19 pandemic.

"It is great to be able to celebrate and acknowledge such a nice cross-section of campus during this year's recognition ceremony, and to do it in person," said ESU Interim President Kenneth Long. "Some of this year's honorees teach in the broad range of disciplines across the curriculum, while others provide administrative support services or student services, make sure this campus is in pristine condition or serve the campus by keeping our technology running smoothly. The work by all of these individuals is important for both recruiting and retaining our students and successfully fulfilling the university's mission."

40 Years of Service: Patrick Monaghan, residential and dining services; and Mike Terwilliger, athletics

35 Years of Service: Curtiss Burton, instructional resources; Donald Cummings, exercise science; and Paul Lippert, communication

30 Years of Service: David Buckley, physics; Sally Duffy, financial aid; Kelly Harrison, athletic training; Wayne Heller, building maintenance; Geryl Kinsel, records and registration; Thomas LaDuke, biological sciences; William Loffredo, chemistry and biochemistry; Paul Schembari, mathematics; Jack Truschel, academic enrichment and learning; Nancy VanArsdale, English; Luis Vidal, instructional support

25 Years of Service: Alan Angulo, computing and communication services; Heather Burch, Aramark; James Capozzolo, instructional resources; Shala Davis, exercise science; Sussie Eshun, psychology; Kelly Felker, building care services; Jon Gold, chemistry and biochemistry; Nancy Jo Greenawalt, athletics; Roger Hammond, energy and plant services; Janine Hyde-Broderick, upward bound; Claranne Mathiesen, nursing; Mary Ann Lugo, student enrollment center; Kenneth Mash, political science and economics; Joni Oye-Benintende, art + design; Marie Reish, history and geography; Kim Sandt, business office; Keith Vanic, athletic training; and Tracy Whitford, biological sciences

20 Years of Service: David Bailey, university police and safety; Mary Devito, computer science; Clotilde Di Vitto, transfer center; Stephanie French, theatre; Sarah Goodrich, financial aid; Jan Hoffman, academic enrichment and learning; T. Michelle Jones-Wilson, chemistry and biochemistry; Richard Kelly, chemistry and biochemistry; Barbara Lehmann, Aramark; Kevin MacIntire, building care services; James Maroney, theatre; Roxann Nitschke, building care services; Leslie Raser, student accounts; Claudia Rodenhauser, Aramark; Helen Seidof, building care services; Lee Sidlosky, campus care; Kerry Siegfried, building care services; Gene White, physical education teacher certification; Paul Wilson, biological sciences; Sean Wright, admissions; and Grant Young, building maintenance

15 Years of Service: Paul Andricosky, building care services; Carlos Aussie, university police and safety; Fred Bernstein, computing and communication services; Christine Brett, physical education teacher certification; Olivia Carducci, mathematics; Marianne Cutler, sociology, social work and criminal justice; Caroline DiPipi-Hoy, special education, rehabilitation and human services; Sandra Eckard, English; L. Johan Eliasson, political science and economics; Ryan Fenical, campus care; Timothy Francis, Aramark; Andrew Johnson, mailroom, receiving and distribution center; Jonathan Keiter, mathematics; Steven LaBadie, university relations; Cynthia Leenerts, English; Linda Linker, Aramark; Robert Marmelstein, computer science; Gavin Moir, exercise science; Julie Monaghan, building care services; Shawn Munford, exercise science; Shokrollah Pazaki, sociology, social work and criminal justice; Lavar Peterson, computing and communication services; Tania Ramirez, facilities management; Ramon Seda, university police and safety; Beth Rajan Sockman, professional and secondary education; Gerard Rozea, athletic training; and Laura Waters, nursing

10 Years of Service: Denise Aylward, procurement; Nancy Boyer, ESU Foundation; Frankie Brea, campus care; Anna Mae Bush, Aramark; Zeynep Cagatay, Aramark; Millagros Casillas, graduation services; Li-Ming Chiang, hospitality, recreation and tourism management; Nicole Chinnici, Dr. Jane Huffman Wildlife Genetics Institute; Melissa Cullum, Aramark; Rose Delorenzo, Aramark; Meagan DeWan, athletics; Christopher Dudley, history and geography; Douglas Friedman, business management; Robert Jenkins, graduation services; Heon Kim, modern languages, philosophy and religion; Sandra Kizer, Aramark; Jason Lamond, Aramark; Sharon Lee, printing and duplication services; Donald Lynch, facilities management; David Mazure, art + design; Adam McGlynn, political science and economics; John Melchiori, energy and plant services; Annie Mendoza, modern languages, philosophy and religion; Patricia Mota, Aramark; Marcus Natt, Aramark; Andrea O'Brien, university police and safety; Caitlin Ord, athletics; Van Reidhead, sociology, social work and criminal justice; Emily Sauers, exercise science; Laurie Schaller, ESU Foundation; Debra Seip, Aramark; Thadius Smith, building maintenance; Sarah Tundel, student enrollment center; Shawn Watkins, reading; Caryn Wilkie, ESU Foundation; Rachel Wolf-Colon, communication sciences and disorders; and Lauren Worrell, human resources

See the original post here:

East Stroudsburg University honors 119 employees with ceremony - Pocono Record


IBM and MIT kickstarted the age of quantum computing in 1981 – Fast Company

In May 1981, at a conference center housed in a chateau-style mansion outside Boston, a few dozen physicists and computer scientists gathered for a three-day meeting. The assembled brainpower was formidable: One attendee, Caltech's Richard Feynman, was already a Nobel laureate and would earn a widespread reputation for genius when his 1985 memoir Surely You're Joking, Mr. Feynman!: Adventures of a Curious Character became a bestseller. Numerous others, such as Paul Benioff, Arthur Burks, Freeman Dyson, Edward Fredkin, Rolf Landauer, John Wheeler, and Konrad Zuse, were among the most accomplished figures in their respective research areas.

The conference they were attending, The Physics of Computation, was held from May 6 to 8 and cohosted by IBM and MIT's Laboratory for Computer Science. It would come to be regarded as a seminal moment in the history of quantum computing, not that anyone present grasped that as it was happening.

"It's hard to put yourself back in time," says Charlie Bennett, a distinguished physicist and information theorist who was part of the IBM Research contingent at the event. "If you'd said 'quantum computing,' nobody would have understood what you were talking about."

Why was the conference so significant? According to numerous latter-day accounts, Feynman electrified the gathering by calling for the creation of a quantum computer. "But I don't think he quite put it that way," contends Bennett, who took Feynman's comments less as a call to action than a provocative observation. "He just said the world is quantum," Bennett remembers. "So if you really wanted to build a computer to simulate physics, that should probably be a quantum computer."

For a guide to who's who in this 1981 Physics of Computation photo, click here. [Photo: courtesy of Charlie Bennett, who isn't in it, because he took it]

Even if Feynman wasn't trying to kick off a moonshot-style effort to build a quantum computer, his talk, and The Physics of Computation conference in general, proved influential in focusing research resources. "Quantum computing was nobody's day job before this conference," says Bennett. "And then some people began considering it important enough to work on."

It turned out to be such a rewarding area for study that Bennett is still working on it in 2021, and he's still at IBM Research, where he's been, aside from the occasional academic sabbatical, since 1972. His contributions have been so significant that he's not only won numerous awards but also had one named after him. (On Thursday, he was among the participants in an online conference on quantum computing's past, present, and future that IBM held to mark the 40th anniversary of the original meeting.)

Charlie Bennett [Photo: courtesy of IBM]

These days, Bennett has plenty of company. In recent years, quantum computing has become one of IBM's biggest bets, as it strives to get the technology to the point where it's capable of performing useful work at scale, particularly for the large organizations that have long been IBM's core customer base. Quantum computing is also a major area of research focus at other tech giants such as Google, Microsoft, Intel, and Honeywell, as well as a bevy of startups.

According to IBM senior VP and director of research Dario Gil, the 1981 Physics of Computation conference played an epoch-shifting role in getting the computing community excited about quantum physics' possible benefits. "Before then, in the context of computing, it was seen as a source of noise, like a bothersome problem that when dealing with tiny devices, they became less reliable than larger devices," he says. "People understood that this was driven by quantum effects, but it was a bug, not a feature."

Making progress in quantum computing has continued to require setting aside much of what we know about computers in their classical form. From early room-sized mainframe monsters to the smartphone in your pocket, computing has always boiled down to performing math with bits set either to one or zero. But instead of depending on bits, quantum computers leverage quantum mechanics through a basic building block called a quantum bit, or qubit. It can represent a one, a zero, or, in a radical departure from classical computing, both at once.
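The "both at once" behavior can be made concrete with a toy simulation. The sketch below is an illustrative classical emulation, not how real quantum hardware works: it represents a single qubit as a pair of complex amplitudes and applies a Hadamard gate, the standard operation for turning a definite 0 into an equal superposition.

```python
import math

# A single qubit's state is a pair of complex amplitudes (alpha, beta)
# for the |0> and |1> outcomes; measurement probabilities are
# |alpha|^2 and |beta|^2, which always sum to 1.

def hadamard(state):
    """Apply a Hadamard gate to a one-qubit state vector."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

zero = (1 + 0j, 0 + 0j)        # a classical-style "definitely 0" qubit
superposed = hadamard(zero)    # now "both at once"
probs = [abs(a) ** 2 for a in superposed]
print(probs)                   # each outcome ~0.5: equally likely

# Applying Hadamard again interferes the amplitudes back to a definite 0,
# something no classical coin flip can do.
back = hadamard(superposed)
print([round(abs(a) ** 2, 6) for a in back])
```

The second Hadamard is the key point: amplitudes can cancel as well as add, and that interference, rather than mere randomness, is what quantum algorithms exploit.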

Dario Gil [Photo: courtesy of IBM]

Qubits give quantum computers the potential to rapidly perform calculations that might be impossibly slow on even the fastest classical computers. That could have transformative benefits for applications ranging from drug discovery to cryptography to financial modeling. But it requires mastering an array of new challenges, including cooling superconducting qubits to a temperature only slightly above absolute zero, or -459.67 degrees Fahrenheit.

Four decades after the 1981 conference, quantum computing remains a research project in progress, albeit one that's lately come tantalizingly close to fruition. Bennett says that timetable isn't surprising or disappointing. For a truly transformative idea, 40 years just isn't that much time: Charles Babbage began working on his Analytical Engine in the 1830s, more than a century before technological progress reached the point where early computers such as IBM's own Automated Sequence Controlled Calculator could implement his concepts in a workable fashion. And even those machines came nowhere near fulfilling the vision scientists had already developed for computing, including "some things that [computers] failed at miserably for decades, like language translation," says Bennett.


In 1970, as a Harvard PhD candidate, Bennett was brainstorming with fellow physics researcher Stephen Wiesner, a friend from his undergraduate days at Brandeis. Wiesner speculated that quantum physics would make it possible to send, through a channel with a nominal capacity of one bit, two bits of information, "subject however to the constraint that whichever bit the receiver choose to read, the other bit is destroyed," as Bennett jotted in notes which, fortunately for computing history, he preserved.

Charlie Bennett's 1970 notes on Stephen Wiesner's musings about quantum physics and computing. [Photo: courtesy of Charlie Bennett]

"I think was the first time ever somebody said the phrase 'quantum information theory,'" says Bennett. "The idea that you could do things of not just a physics nature, but an information processing nature with quantum effects that you couldn't do with ordinary data processing."

Like many technological advances of historic proportions (AI is another example), quantum computing didn't progress from idea to reality in an altogether predictable and efficient way. It took 11 years from Wiesner's observation until enough people took the topic seriously enough to inspire the Physics of Computation conference. Bennett and the University of Montreal's Gilles Brassard published important research on quantum cryptography in 1984; in the 1990s, scientists realized that quantum computers had the potential to be exponentially faster than their classical forebears.

All along, IBM had small teams investigating the technology. According to Gil, however, it wasn't until around 2010 that the company had made enough progress that it began to see quantum computing not just as an intriguing research area but as a powerful business opportunity. "What we've seen since then is this dramatic progress over the last decade, in terms of scale, effort, and investment," he says.

IBM's superconducting qubits need to be kept chilled in a super fridge. [Photo: courtesy of IBM]

As IBM made that progress, it shared it publicly so that interested parties could begin to get their heads around quantum computing at the earliest opportunity. Starting in May 2016, for instance, the company made quantum computing available as a cloud service, allowing outsiders to tinker with the technology in a very early form.


"One of the things that road maps provide is clarity," he says, allowing that "road maps without execution are hallucinations, so it is really important that when you put something out, you have a path to deliver."

Scaling up quantum computing into a form that can trounce classical computers at ambitious jobs requires increasing the number of reliable qubits that a quantum computer has to work with. When IBM published its quantum hardware road map last September, it had recently deployed the 65-qubit IBM Quantum Hummingbird processor, a considerable advance on its previous 5- and 27-qubit predecessors. This year, the company plans to complete the 127-qubit IBM Quantum Eagle. And by 2023, it expects to have a 1,000-qubit machine, the IBM Quantum Condor. It's this machine, IBM believes, that may have the muscle to achieve quantum advantage by solving certain real-world problems faster than the world's best supercomputers.

Essential though it is to crank up the supply of qubits, the software side of quantum computing's future is also under construction, and IBM published a separate road map devoted to the topic in February. Gil says that the company is striving to create a frictionless environment in which coders don't have to understand how quantum computing works any more than they currently think about a classical computer's transistors. An IBM software layer will handle the intricacies (and meld quantum resources with classical ones, which will remain indispensable for many tasks).

"You don't need to know quantum mechanics, you don't need to know a special programming language, and you're not going to need to know how to do these gate operations and all that stuff," he explains. "You're just going to program with your favorite language, say, Python. And behind the scenes, there will be the equivalent of libraries that call on these quantum circuits, and then they get delivered to you on demand."
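As a purely hypothetical sketch of the developer experience Gil describes, consider a graph-partitioning call (max-cut is a problem often cited as a target for quantum circuits). Everything below is invented for illustration: `max_cut` and its backend are not a real IBM API, and the "quantum" layer is stubbed out with a classical brute-force solver.

```python
# Hypothetical sketch only: what "quantum behind the scenes" could look
# like to a Python programmer. The backend here is a classical stand-in,
# brute-forcing the max-cut of a tiny weighted graph.
from itertools import product

def _backend(weights):
    # Stand-in for a quantum circuit: try every 0/1 labeling of the
    # vertices and keep the one that cuts the most edge weight.
    n = max(max(i, j) for i, j in weights) + 1
    return max(product([0, 1], repeat=n),
               key=lambda cut: sum(w for (i, j), w in weights.items()
                                   if cut[i] != cut[j]))

def max_cut(weights):
    """User-facing call: plain Python. In the vision described above, a
    library like this would route to quantum circuits on demand."""
    return _backend(weights)

# Ordinary-looking code; the caller never sees qubits or gates.
edges = {(0, 1): 1, (1, 2): 1, (0, 2): 1, (2, 3): 1}
partition = max_cut(edges)
print(partition)
```

From the caller's side nothing is quantum at all, which is the point: only the body of the backend function would change when quantum circuits take over the heavy lifting.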

IBM is still working on making quantum computing ready for everyday reality, but it's already worked with designers to make it look good. [Photo: courtesy of IBM]

In this vision, "we think that at the end of this decade, there may be as many as a trillion quantum circuits that are running behind the scenes, making software run better," Gil says.

Even if IBM clearly understands the road ahead, there's plenty left to do. Charlie Bennett says that quantum researchers will overcome remaining challenges in much the same way that he and others confronted past ones. "It's hard to look very far ahead, but the right approach is to maintain a high level of expertise and keep chipping away at the little problems that are causing a thing not to work as well as it could," he says. "And then when you solve that one, there will be another one, which you won't be able to understand until you solve the first one."

As for Bennett's own current work, he says he's particularly interested in the intersection between information theory and cosmology, "not so much because I think I can learn enough about it to make an original research contribution, but just because it's so much fun to do." He's also been making explainer videos about quantum computing, a topic whose reputation for being weird and mysterious he blames on inadequate explanation by others.

"Unfortunately, the majority of science journalists don't understand it," he laments. "And they say confusing things about it: painfully, for me, confusing things."

For IBM Research, Bennett is both a living link to its past and an inspiration for its future. "He's had such a massive impact on the people we have here, so many of our top talent," says Gil. "In my view, we've accrued the most talented group of people in the world, in terms of doing quantum computing. So many of them trace it back to the influence of Charlie." Impressive though Bennett's 49-year tenure at the company is, the fact that he's seen and made so much quantum computing history, including attending the 1981 conference, and is here to talk about it is a reminder of how young the field still is.

Originally posted here:

IBM and MIT kickstarted the age of quantum computing in 1981 - Fast Company


Tusky Valley students ‘knock it out of the park’ with engineering projects – New Philadelphia Times Reporter

ZOARVILLE – Rufus the robotic dog was one of the stars of a Project Lead The Way showcase held Friday at Tuscarawas Valley High School.

Normally, such showcases are community events, but it wasn't possible this year because of the COVID-19 pandemic. So instructor Paul Dunlap decided to stage an informal one for staff and students.

"I felt my students deserved to have the ability to present their concepts, ideas and projects to the community," he said.

Students developed a variety of projects for the showcase.

"My seniors have knocked it out of the park this year," Dunlap noted. "They created a robotic dog, a drone factory that builds and assembles drones from the milling stage all the way through laser engraving to it takes off. My other students also created a pencil sorting factory, which sorts them based on color."

Tuscarawas Valley has offered Project Lead The Way classes for the past 12 years.

According to the Project Lead the Way website, the program helps "students to develop in-demand, transportable knowledge and skills through pathways in computer science, engineering and biomedical science."

Tuscarawas Valley offers the classes all the way down to kindergarten.

"So my daughter, who's a kindergartner, is learning how to program virtual robots. She absolutely loves it," Dunlap said.

At this year's showcase, the guest judges were six former Tuscarawas Valley students who are now studying engineering in college. They are Clint Spillman, Jake Rothenstein, Randall Winkhart, Gavin Perkowski and Jonathan Rennicker, all students at the University of Akron, and Seth Ramsey, a student at the Ohio State University.

They were all impressed with the projects they saw.

"Every year it seems like they improve on what the previous class did, and I didn't think that was possible," said Spillman, who is in his junior year at Akron. "They've blown us out of the water with the stuff they do. And I thought the stuff we were doing was advanced, but this is even more advanced."

Added Rennicker, "I'm just in awe. They're amazing, very impressive that they're doing this in high school."

Rennicker, who is in his first year at Akron, said the Project Lead The Way classes at Tuscarawas Valley are way ahead of anything he's doing in college.

Some of the classes Spillman took at Tuscarawas Valley he retook in college, and they were exactly the same.

"I was very prepared, and it just made it so much easier to get through," he said. "You become a leader just because you can help everyone else in the class. They're so surprised with what you've done in high school."

He added that he was impressed by the robotic dog and how the students had programmed it.

Dunlap said he was fortunate to have his former students come back as judges.

"They all got a hold of me to come back, not the other way around," he said. "As soon as they hear something's going on, they want to come back and participate."

See the article here:

Tusky Valley students 'knock it out of the park' with engineering projects - New Philadelphia Times Reporter
