
Why right-wingers think a uniparty controls Congress – Vox.com

As Republican hardliners tossed Speaker Kevin McCarthy out of office and attempted to dictate his replacement, one word kept recurring in their complaints about existing GOP leaders: uniparty.

The term crystallizes an idea widespread on the MAGA right: that too many Republican politicians and especially leaders are, on key issues, aligned with Democrats and the Washington establishment, and working against Donald Trump and the right.

"Right now, we are governed by a uniparty that Speaker McCarthy has fused with Joe Biden and Hakeem Jeffries," Rep. Matt Gaetz (R-FL) said last month, as McCarthy seemed set to keep the government funded and avoid a shutdown. This was the justification Gaetz gave for his push to oust McCarthy (though he may have had personal reasons as well). And since enough other House Republicans were dissatisfied with McCarthy's handling of the spending battles, Gaetz succeeded.

One key outside ally for Gaetz was Steve Bannon, the former Trump aide and now commentator. Bannon frequently deploys the "uniparty" epithet, as he'd done for years. He's long tried to purge the GOP of its more conventional members, replacing them with hardliners who will more loyally back Trump and far-right causes.

In many ways, the idea that Kevin McCarthy was indistinguishable from a Democrat seems self-evidently absurd. The two parties are deeply polarized and locked in seemingly eternal partisan warfare. The GOP has moved far to the right on abortion, immigration, trans rights, gun rights, environmental regulation, and other issues while backing Trump ever more fervently.

Indeed, "uniparty" is an exaggerated, sloppily conceived concept that's often deployed as a way to blame the right's own failures to achieve a conservative policy paradise on some sort of dastardly conspiracy against them by their own leaders.

And yet sometimes it's not entirely off-base.

That's because there are important issues where many Republican elites have long thought the MAGA right's preferences are wrongheaded or downright dangerous, and where those elites work, either openly or subtly, to ensure Trump and his acolytes don't get what they want.

These range from major foreign policy questions about the US's role in the world, to preferences about tactics in government spending battles, to issues at the heart of American democracy, such as whether elections that Donald Trump loses should be certified.

Now, Trump and Gaetz are declaring the election of Mike Johnson as speaker of the House as a win for "MAGA Mike." But will Johnson be able to transform the speakership? Or will he inevitably be drawn, by the institutional incentives of the job, toward governing more like McCarthy? Maybe you either die a MAGA hero, or you live long enough to see yourself become the uniparty.

Politicians and political commentators have long loved a good rhetorical flourish that pits them as plucky underdogs fighting for the interests of the common people against a dastardly, powerful cartel.

Depending on who is using the term, this cartel can be called any number of things: the deep state, the swamp, the special interests, the Blob, the Cathedral, or simply Washington. The commonality is the suggestion that they're the people who are really in control, and who are therefore responsible for all the problems the country faces.

But "uniparty" is useful for those who want to say there's something rotten with the party they'd typically prefer. In 2000, that was leftist supporters of presidential candidate Ralph Nader, who complained that the Democratic Party had become functionally indistinguishable from the GOP. As Ben Zimmer wrote for Politico Magazine, online supporters of Nader disparaged the "corporate UniParty," and Nader himself used the term in a book.

Conservatives, meanwhile, had long slapped moderate Republican politicians with the label RINO, Republican in Name Only. That has a similar vibe to "uniparty." But by the mid-2010s, many on the right felt frustrated and disillusioned with the GOP establishment. Complaints included GOP leaders' openness to immigration reform and free trade, foreign policy failures exemplified by the Iraq war, the failure to drastically cut spending under President Barack Obama, and a general sense that the party simply didn't fight Democrats hard enough.

Trump's presidential campaign became the vessel for these frustrations. So commentators affiliated with the populist right, like Ann Coulter and Breitbart editor-in-chief Alexander Marlow, began denouncing Republican Trump critics (of which, back then, there were many) as the uniparty. In Coulter's telling, this included "the Republican Brain Trust, the Washington Establishment, the Insiders, ... the lobbyists, the consultants, the think tanks, [and] the pollsters."

Trump himself preferred to talk about "the swamp" and, once in office, "the deep state," likely because disparaging the Republican Party made little strategic sense for him once he was the leader of that party.

But once Bannon was ousted from Trump's White House, he started using the term again to denounce all the GOP establishment squishes who were undermining the MAGA agenda. He's still doing so today, and so, now, is Donald Trump Jr., who tweeted in July that Florida Gov. Ron DeSantis's primary campaign was "the Uniparty vs. Trump & MAGA."

The framing of Trump as inexorably opposed to a hostile GOP establishment is oversimplified and out of date on many issues. As president, Trump happily embraced conventional Republican policies on many issues (tax cuts, judicial appointments, rolling back regulations) while the GOP establishment moved in his direction on others (party elites largely abandoned their longtime support for immigration reform and free trade deals). Trump is perfectly comfortable with big business and big donors, and did little during his presidency to challenge their power. Many, if not most, leading Republicans now see themselves as fully on the Trump team.

And yet it's still true that a core of Republican elites has major temperamental, tactical, and substantive differences with Trump and the right, sometimes to the point where they really do seem more aligned with Democrats, and to be working against the right either openly or subtly.

Foreign policy: These differences are perhaps most intense on foreign policy. Trump has made clear that he supports massively overhauling US foreign policy. He's talked frequently about withdrawing from NATO, pulling back US troops from deployments abroad, and generally playing a less active role in world affairs. The latest flashpoint for this clash of visions is the Russian invasion of Ukraine, with the MAGA right becoming intensely opposed to aiding Ukraine further.

The traditional hawkish Republican elite has fiercely resisted these changes. While Trump was president, his defense secretaries regularly delayed or slow-walked his troop withdrawal orders. If Trump had actually tried anything like withdrawing from NATO while in office, he would have seen major resignations of top officials (though former Trump national security adviser John Bolton warns it may happen if he is elected to a new term). Senate Minority Leader Mitch McConnell has loudly championed Ukraine's cause, and Kevin McCarthy reportedly tried to find a way to get more Ukraine aid through the House despite right-wing opposition.

(It should be noted that a leftist's conception of uniparty foreign policy would be rather different: they'd point out hawkishness toward China, high levels of military spending, and support for Israel as areas where there's now little difference between the two parties. And Obama aide Ben Rhodes viewed that administration's foreign policy on Iran and the Middle East as an effort to push back against a "Blob" of entrenched establishment thinking. But the right's concept of a uniparty is just about issues where the establishment disagrees with them.)

Election theft: When Trump tried to steal the 2020 election from Biden, the Republican Party did not act in a disciplined, unified way to help him do it, much to his chagrin. Yes, many elected Republicans claimed to doubt Biden's wins in certain states and said they wanted them thrown out, and most who knew better did little to stop Trump. And several, including the new speaker of the House, actively tried to help him.

But key Republicans with positions of authority to affect the results (governors, state legislative leaders, state election officials, Justice Department officials, judges, and the vice president) overwhelmingly didn't use their formal powers to help Trump pull off the steal. The "uniparty" united around the shared belief that respecting the results of American elections and the peaceful transfer of power is important. Trump would like to stop that from happening again.

Government spending battles: Even before Trump's rise, many conservatives have long resented what they see as the GOP establishment's willingness to cave to Democrats on spending policy, when they want far greater cuts. (Trump himself never staked too much on these fights while he was president; when he brought on a shutdown, it was instead over trying to get more money for his border wall.)

After one such government spending deal in 2013, Angelo Codevilla, who would become a leading intellectual voice of the pro-Trump right, wrote: "The Republican Party's leaders have functioned as junior members of America's single ruling party, the UniParty." Whatever differences existed between then-congressional leaders, Republicans John Boehner and Mitch McConnell and Democrats Nancy Pelosi and Harry Reid, he said, "got worked out behind closed doors."

GOP establishment leaders in Congress and on the appropriations committees generally profess that they'd love to cut spending more, but that the activists' demands and their understanding of politics are simply absurdly unrealistic. They argue that the level of cuts demanded by the right would be deeply unpopular, that there's no way to force Democrats to cave when they control key levers of government, and that a prolonged government shutdown would hurt Republicans politically.

But the hardliners suspect all this is cover for a comfort with the status quo, and a lack of desire to truly disrupt Washington. And Gaetz used the latest government spending agreement between McCarthy and Democrats as a pretext to oust McCarthy from the speakership.

In a sense, the uniparty idea is an attempt to answer a question: Why are so many Republican elites still so resistant to following Trump or the base on key issues?

One theory, pushed by Bannon and Codevilla before him, is that it's about the people: The wrong Republicans, lacking sufficient loyalty to Trump and the cause, are in these jobs. So if Trump is returned to power, his appointees should be more carefully chosen for loyalty to the MAGA cause, not just the GOP. Purportedly uniparty-aligned elected officials should be primaried and replaced with MAGA-friendly candidates.

Replacing McCarthy with Johnson, a longtime conservative and Christian right activist who helped Trump try to steal the 2020 election, is, in this thinking, a major step forward.

That surely has some truth to it, but it's not the whole story. Because another view is that the supposed uniparty politicians are often responding to the institutional incentives and pressures of their roles, and that even MAGA diehards in those roles will face the same incentives and pressures.

Notably, when Trump was president, he regularly caved to the supposed uniparty. He could have overridden his appointees and forced quicker troop withdrawals, but he often didn't. He could have forced bigger fights about cutting spending, but he generally didn't. As president, with his political future and a whole agenda at play, he had to weigh priorities and calculate political blowback.

In government, it's often said that where you stand depends on where you sit. Appointees to head government agencies typically become champions of their particular agencies' priorities. Similarly, if you're a right-wing media commentator or a representative in a deep red district, your only real priority is to please a far-right audience, and you have no real responsibility to govern or achieve anything.

But if you're speaker of the House, you have different priorities. You have to manage the concerns of the vulnerable swing-district members on whom your majority depends. You have to cultivate big-money donors who fund your effort to keep that majority. And you actually have responsibility over policy.

One major tell about how this works will be seen in how Speaker Johnson approaches Ukraine aid. As a little-known Congress member in a deep red district, he frequently criticized aid to Ukraine. In May 2022, he said, "We should not be sending another $40 billion abroad when our own border is in chaos, American mothers are struggling to find baby formula, gas prices are at record highs, and American families are struggling to make ends meet, without sufficient oversight over where the money will go."

But now, as speaker of the House, he's playing a different tune. "We can't allow Vladimir Putin to prevail in Ukraine because I don't believe it would stop there," Johnson said on Fox News Thursday. "We're not going to abandon them." The uniparty may have life in it yet.



Circuits in Session: Analysis of the Quality of ChatGPT4 as an … – JD Supra

This is the third and concluding article of the Circuits in Session series. The quality of GPT4's legal analysis is evaluated, and both positive and negative test results are reported. It did process legal frameworks very well but struggled with nuanced understanding of facts and equity, a significant limitation for real-world applications. The results of the experiments are encouraging to those who hope to use generative AI as a legal tool. Someday AI may even take over some tasks performed by human judges, if not replace them entirely, starting with appellate judges, but not today.

The first two articles in this series on AI as an Appellate Court Judge are: Circuits in Session: How AI Challenges Traditional Appellate Dynamics (10/13/23); Circuits in Session: Addendum and Elaboration of the Appellate Court Judge Experiment (10/25/23). The two experiments discussed in this series demonstrate impressive legal reasoning abilities. But the results did not support the hypotheses that GPT4 could serve as an appellate court judge in some cases. Note the experiments did not test legal research ability, only the ability to analyze and make just decisions.

The evidence showed GPT4 was good, but not great, and that means it was inadequate for the job. AI should only be used to improve the quality of dispute resolution, to uplift the bench, not lower the bar. Justice is critical to the proper functioning of society and should never be automated just for efficiency and economy. The conclusion elaborates on these thoughts and provides a vision of how generative AI may someday be used to uphold the quality and integrity of legal systems throughout the world.

High-Level Overview of Experiment Results

My hypothesis of the competence of GPT4 to serve as an appellate judge was not confirmed to my satisfaction because it fell short in two ways: functional, technical limitations (such as the training-data cutoff and the lack of a complete legal research database), and deficiencies in its evaluation of the facts and equities of a case.

The first, functional type of challenge can probably be fixed in just a few years. The second, however, which hinges on correcting the deficiencies in fact and equity evaluation, is much more challenging. It is hard to predict how long it may take to fix that, if ever. In my admittedly idealistic, pro-human way of thinking, that means we should not consider replacing, or even supplementing, human judges until the AI version is at least as good, if not better, than the best humans. AI must be a path to excellence, not just less expensive mediocrity.

Some believe the second type of challenge, the quality challenge, is forever beyond the ability of artificial intelligence. If so, that means that AIs can never be as good as human judges, even on the highly intellectual appellate level. Time will tell. Just remember, they used to say the same thing about Chess, and then Go, etc. If you assume it is possible, then the question is how long it may take. That depends on whether LLM AI models like GPT4 can keep improving at the exponential rate they have shown in the last two years. That again is a matter of debate. Some, including Bill Gates, think it is already at or near a dead end. Bill Gates does not expect GPT-5 to be much better than GPT-4 (Decoder, 10/21/23). Others think that GPT4 and other generative AIs are still in the early stages of exponential improvements.

The bigger question is whether the continued improvement in reasoning, approaching super-intelligence in this one ability, will ever give GPTs a proper appreciation of the facts and equity of a case. The AI has no real feelings, and, as to facts, has only a secondhand, abstract experience of reality. Will super-reasoning ever make up for that?

I for one think that it might, but the experiments in Circuits in Session show it is not there yet. It is not ready to replace Appellate Court judges today, even where facts are not disputed, as was true in the case experimented with, Brandi McKay vs. Miami-Dade County, 36 F.4th 1128 (11th Cir. June 9, 2022). Still, the overall results are encouraging, very encouraging. GPT is already a cutting-edge tool for legal analysis, albeit still challenging to use. Despite the problems encountered, AI can, if used properly, serve as a powerful tool of justice. It can make the work of human judges easier and more efficient. Someday, perhaps soon, it may be able to appreciate all of the facts and equities of a case, just like our most sensitive, fair-minded human judges. Then it could be of much greater service.

Critique of ChatGPT's Judicial Analysis

Prediction of Outcome of the Appeal and Possible Dissent

I first asked GPT4 to predict the outcome of the McKay appeal. It correctly predicted the lower court's opinion would be affirmed. A correct prediction is, in itself, solid proof of competence, but I was looking for much more. I wanted to know the details of its legal reasoning.

The lower court found that the economic reality of Ms. McKay's relationship with the County indicated that she was the primary beneficiary. She gained valuable skills and networking opportunities, which is consistent with the purpose of an internship. The duration of the internship was also not found to be grossly excessive, further supporting the volunteer classification.

That is the core reason provided, and it is right on point, although some of the other language in the GPT memorandum not quoted here is unhelpful, though not incorrect. So I give the reasoning a high score, but not perfect. I was known as a hard grader in law school too.

You could say the prediction of an affirmation is not too surprising, in view of the statistical rarity of reversals, even just partial reversals. It only happens in eight percent (8%) of all appeals. Edwards, Why Appeals Courts Rarely Reverse Lower Courts: An Experimental Study to Explore Affirmation Bias, 68 Emory L. J. Online 1035 (2019). Since generative AI works through probability predictions, you could explain the overall prediction that way, but not the on-point explanation as to why it would likely be affirmed. Plus, the statistical prediction of a dissent goes well against the odds. This indicates real legal analysis. It also confirms that GPT4's score in the top ten percent (10%) of the multi-state Bar Exam was no fluke. Is it irrational exuberance to expect GPT5 to score in the top one percent (1%)?

In the second experiment GPT-4 predicted a 40% likelihood of dissent based on the assigned panel of Circuit Judges Jordan, Tjoflat and Pryor (Jill). Redo Revised Second Stage of the ChatGPT-4 Appeal Experiment. In the first experiment it predicted the likelihood of a dissent at an even more remarkable 55%. Circuits in Session: How AI Challenges Traditional Appellate Dynamics (10/13/23).

This shows a complete break from merely statistics-based predictions because, in fact, only 2.6% of Circuit appeal cases have a dissent. Epstein, Landes & Posner, Why (and When) Judges Dissent, Journal of Legal Analysis, Spring 2011: Volume 3, Number 1, at pg. 106. Moreover, my study using Lexis indicates that none of the three judges on this panel are particularly prone to dissents. The judges again are Jill Pryor (appointed by Obama), Adalberto Jordan (appointed by Clinton), and Gerald Tjoflat. It is important to understand that Gerald Bard Tjoflat is a remarkable senior judge. He was appointed as a Circuit Appeals Judge by President Ford in 1975 and retired in November 2019. He was born in 1929 and served as a counter-intelligence investigator at the end of the Korean War. He is well known as one of the country's great jurists, arising from my home court in the Middle District of Florida. In 1995, the Duke Law Journal published a tribute to Judge Tjoflat. Tribute to Gerald Bard Tjoflat, Duke Law Journal, Vol. 44:985 (1995). It included articles by then Chief Justice William H. Rehnquist, retired Justices Lewis F. Powell, Jr. and Byron R. White, and Judge Edward R. Becker.
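To make the "against the odds" point concrete, here is a minimal sketch, in Python, comparing GPT4's predicted dissent probabilities (55% and 40%) with the 2.6% historical base rate cited above. The figures are the ones quoted in this article; the likelihood-ratio framing is my own illustration, not part of the experiments themselves.

# Back-of-the-envelope comparison of GPT4's predicted dissent probabilities
# with the historical base rate cited in this article (2.6%, per Epstein,
# Landes & Posner). 55% and 40% are the predictions from the first and
# second experiments, respectively.
base_rate = 0.026
predictions = {"first experiment": 0.55, "second experiment": 0.40}

for label, p in predictions.items():
    prob_ratio = p / base_rate  # how many times the base probability
    odds_ratio = (p / (1 - p)) / (base_rate / (1 - base_rate))
    print(f"{label}: {p:.0%} predicted vs. {base_rate:.1%} base rate "
          f"-> about {prob_ratio:.0f}x the base probability (odds ratio ~{odds_ratio:.0f})")

A roughly fifteen- to twenty-fold departure from the base rate is what makes the dissent prediction look like case-specific analysis rather than a statistical guess.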

Judge Tjoflat's participation in Brandi McKay vs. Miami-Dade County, 36 F.4th 1128 (11th Cir. June 9, 2022) was as a voluntary retired judge, since, like most courts in the U.S., the Eleventh Circuit does not have enough active-service judges to hear its case load. The Eleventh Circuit is sometimes called one of the most conservative courts in the country. But see: The Eleventh Circuit Cleans Up the Mess (Lawfare, 09/22/22). If you were to pick an outcome along political lines, with little regard to legal reasoning, as sometimes happens in other courts, you would predict a reversal here by Pryor and Jordan, against Tjoflat.

As mentioned in Circuits in Session: Addendum and Elaboration of the Appellate Court Judge Experiment (10/25/23), when prompted to guess which judge would be the most likely to dissent, it guessed "Judge Jill Pryor, given her judicial philosophy and past rulings on labor issues." That was the wrong guess, as the dissent was actually by Judge Adalberto Jordan. Based on my studies of the rulings of these judges in employment law, I suspect this is an error that many Eleventh Circuit employment law experts would have made, and that many would have predicted Pryor over Jordan as a possible dissenter. See, e.g., Lewis v. City of Union City, 918 F.3d 1213, 1231 (11th Cir., 3/21/19) (Jill Pryor joined this unusually contentious dissent in summary judgment claims against city employee, whereas Jordan upheld ruling for employer); EEOC v. Catastrophe Mgmt. Sols., 876 F.3d 1273, 1279 (11th Cir., 12/5/17) (Pryor joined dissent and would grant en banc review of a denial of an employee discrimination claim, whereas Jordan upheld ruling); Villarreal v. R.J. Reynolds Tobacco Co., 839 F.3d 958, 973, 981 (11th Cir., 10/05/16) (complex opinion where Jill Pryor joins in dissent to panel and favors employee in disparate impact discrimination case; Judge Jordan joins in separate, more limited dissent).

When asked to speculate as to why Judge Adalberto Jordan might object, the GPT response was again very good. I explained this before in the last article. Id. In the meantime, I have researched using Lexis the history of Judge Jordan's opinions on employment law, and his history of dissents. I found very few reversals, and they are for remand to allow a jury to make a factual determination. See, e.g., Martin v. Fin. Asset Mgmt. Sys., 959 F.3d 1048, 1058 (11th Cir., 5/14/20) (Jordan dissents in part, and would reverse summary judgment and remand to jury for Title VII retaliation claim); Ojeda-Sanchez v. Bland Farms, LLC, 499 Fed. Appx. 897, 904 (11th Cir., 11/29/12) (Jordan dissents in part from ruling for employer in an FLSA case and would remand for a jury trial on a question of fact regarding time worked); Nelson v. MLB Hotel Manager, LLC, 2022 U.S. App. LEXIS 19298, 2022 WL 2733720 (11th Cir., 07/13/22) (Jordan affirms summary judgment for employer in an FLSA restaurant tip case; opinion notable for the reprimand of counsel for misuse of sanction motions that often "transforms a good-faith legal dispute into an ugly mudslinging battle").

After digging deeper into the case law, I am more impressed than ever regarding ChatGPT's analysis of the dissent. Maybe someday its capabilities will be expanded to allow it to study all cases cited and to provide its own citations. This shows GPT4 is ready to be a partial tool, but not ready to replace an appellate judge's law clerk, much less the judge. Still, a very strong initial showing by GPT4.

Appellate Opinion Submitted for First Time and Detailed Summaries and Analysis Provided

In the next stage of the experiment, I showed GPT4 the actual Eleventh Circuit opinion for the first time and asked for its analysis. The opinion is very interesting on many levels. Brandi McKay vs. Miami-Dade County, 36 F.4th 1128 (11th Cir. June 9, 2022). GPT4 was asked to provide a critical analysis of the majority opinion and any errors that were arguably made. I thought the response was good and would have to agree with it, even though I agree with the lower court's outcome granting summary judgment for the employer. Here is the AI's critique of the majority opinion.

Memorandum on Critical Analysis of the Majority Opinion

My only critique of GPT's criticism is that it did not mention the summary judgment standard, and did not point out genuine issues of material fact. This was the bottom-line criticism of the dissent, but GPT4 did not seem to pick up on that. Still, when I next asked about the dissent, it did become the AI's lead point:

Emphasis on Jury Trial: One surprising element is Judge Jordan's strong emphasis on the need for a jury trial to determine Ms. McKay's employment status. This is a departure from the majority's approach, which seemed content to make that determination at the judicial level.

So the AI got the point, but seemed surprised about it. Plus, it was not specific regarding disputed facts requiring a remand. No law clerk or judge would be surprised or make this mistake in analysis of Rule 56. This error is an example of GPT's current data training handicap. It did not have a complete collection of case law, like West or Lexis, or even smaller case law data sets like Justia, CaseText (now part of West) and Google Scholar. This can be easily fixed by expanding its training database and/or tying into existing legal databases, which both Lexis and West are now doing. The technology of justice is certain to continue improving.

AI Provides Opinion and Analysis of How the Lower and Appellate Court Should Have Ruled

Now I asked GPT4 the key question of how it thought the lower court should have ruled. It began by correctly identifying the central issue in this case as "whether Brandi McKay should be considered an employee under the Fair Labor Standards Act (FLSA) and thus entitled to compensation." Good start. Lawyers and judges all too often fail to grasp the key issue. GPT4 then goes on to provide its analysis.

Independent Judgment and Analysis

That analysis, excepting again the omission of the admittedly secondary Rule 56 summary judgment factor, is unquestionably correct. These are key factors for a court to consider to determine if McKay is an employee. My disagreement is with how GPT4 weighed these factors, not with the factors used. Here is GPT4's conclusion, its ultimate ruling in the McKay v. Miami-Dade County case.

Based on these factors, the correct holding of the district court should have been that Ms. McKay was an employee under the FLSA for the duration of her time in the FIPP program beyond the initial four-week educational period. She should be entitled to back pay calculated based on the minimum wage, as well as any applicable overtime pay.

The lack of a structured educational component, the economic benefits derived by Miami-Dade County, and the level of independence and responsibility assumed by Ms. McKay in her role all point toward her classification as an employee under the FLSA. Therefore, she should be entitled to compensation in accordance with federal labor laws.

Note the AI does not conclude McKay was an employee for the first four weeks, and so the reversal would only be partial. McKay had claimed she was an employee throughout, including the first month. Also, McKay quit after the fifth month and so obviously was not making a claim for the sixth month. The GPT Judge award would thus have been for only four months.

Next, I asked GPT4 how the Circuit Court should have ruled on the appeal, which, in terms of sophisticated legal analysis, is a slightly different question. The AI properly understood that the standard of review was part of appellate analysis and correctly determined that it was de novo. The GPT went on to reconsider the same factors and reached the same result under de novo review. Any other result at this point would have been inconsistent.

Based on this independent analysis, the correct holding of the appellate court should have been to reverse the district court's decision and remand the case for a determination of damages. Ms. McKay should be classified as an employee under the FLSA for the duration of her participation in the FIPP program beyond the initial four-week educational period.

The AI ruling differs from the dissent because Judge Jordan wanted only a remand for a jury adjudication of the question of whether Brandi McKay was an employee. There is nothing wrong with that; I did ask the AI for its own independent analysis, and that is what it did. It did not simply copy the dissent but went further in its ruling for the employee. In that sense it agreed with the majority and determined that the facts were undisputed enough for a court ruling.

Regarding the weight given to the factors, I agree with the majority, not GPT4, and think the six-month duration of the program is not too long, and that the economic realities showed that more, not less, benefit went to the employee. That is based on my real-world experience with employment and with interns. No doubt Judge Gerald Tjoflat, the author of the McKay opinion, was of the same opinion. Plus, I happen to know from my subsequent online research something that the AI did not, and apparently neither did the human judges: McKay got a job with the Palm Beach County Medical Examiner's Office. She was able to leverage her experience with Miami-Dade County to find employment with the nearby, smaller office in West Palm Beach.

I am not at all surprised. I doubt that Judge Tjoflat would have been surprised either. He was an investigator for the Army after being drafted at the end of his first year in law school. Interesting footnote: he had flat feet, a medical condition that would have disqualified him from military service. He could have easily avoided the draft, but instead, he hid his disability and somehow made it through basic training so that he could serve.

There was no dispute in this case that the Miami-Dade Forensic Internship Photography Program is the only one of its kind in the country. Brandi McKay applied to the internship to gain invaluable experience. She was clearly told there would be no pay for her work at Miami-Dade. Her only alternative to obtain this kind of experience was by enrolling in private Barry University for another college degree. As everyone in Florida well knows, Barry is expensive. The real-world consideration provided to Brandi McKay here was very strong. This is the basis of my agreement with the majority of human judges here, and disagreement with the AI judge.

The AI was, in my view, naive. It needs much more real-world information to be a wise judge. Apparently, this will come in future releases of ChatGPT.

Plus, you could question defense counsel somewhat here for not making a better record of Brandi McKay's benefits, but you never know; perhaps that was done. Maybe all the favorable facts showing consideration to McKay were not quoted in defendant's short brief, nor by any of the judges. Still, I doubt that. Perhaps McKay obtained new employment after she lost her case, and so that could not have been discovered. What made her want to sue Miami-Dade anyway? I would like to read her deposition transcript. The Miami-Dade program taught her a trade, just as she had hoped. She knew she was not going to be paid when she enrolled. So why did she turn around and sue them? Does GPT4 even know to ask these questions?

I am sure the human judges do. They have seen a lot of things, a lot of unsavory people, and many worthy plaintiffs too. Judge Gerald Tjoflat was 92 years old at the time he wrote the majority opinion in Brandi McKay vs. Miami-Dade County, 36 F.4th 1128 (11th Cir. June 9, 2022). He had served as a judge since 1968. There is no doubt that Judge Tjoflat, although not perfect, had great knowledge of the human condition. Far more than baby jurist GPT4.

Brandi McKay sued to try to get paid for a position that she was always clearly told would be unpaid. She sued anyway. She had nothing to lose because her attorney almost certainly took the case on contingency. I have seen that scenario in employee claims many times. Maybe Brandi quit before finishing her last month because she saw the writing on the wall, that she was not well liked, or maybe they did not think her job performance was as good as the other student's. What we know from this limited record is that she quit after five months to look for work elsewhere and sued the program that gave her the chance to do that.

I am skeptical of the economic realities here. I am reluctant to rule against a government agency or private corporate entity offering a unique training program, especially an agency that was very clear and up front that there would be no pay for this training and experience, but no high tuition charges either. Who was taking advantage of whom in these circumstances? What were the real equities here? Brandi McKay got a free education and ended up with a good job nearby. I do not think that ChatGPT4 has enough real-world experience to sense what was likely going on, not yet anyway. Perhaps in a future version it will see things differently and not be so naive. It is almost never black and white, but rather shades of grey. The human judges here, under the wise leadership of senior Judge Tjoflat, saw through the smoke and mirrors of the complaining plaintiff and made the right call.

AI Analyzes Its Prior Predictions and then Critiques the Actual Eleventh Circuit Opinion

To probe deeper into GPT4's legal reasoning abilities, I next asked it to critique its own work where it predicted that the appellate court would affirm the lower court's decision. I asked this because GPT4 had just opined that the lower court decision should have been reversed, not affirmed. It had changed its mind on the merits of the case based on reading the actual opinion for the first time, including the dissent. The dissent by Judge Jordan was apparently very persuasive. GPT4 explained its flip-flop as follows: "Upon closer examination, the primary beneficiary test could very well favor Ms. McKay, especially considering the diminishing educational value and the county's substantial economic benefit from her work." Nailed that part, although I thought the rest of its self-critique was a tad lame and off-point. Revised Fifth Stage of the ChatGPT-4 Appeal Experiment (Losey, 10/15/23).

Then I asked for criticisms of the Eleventh Circuit's majority opinion, where it did better. It was a proper critique, although, as mentioned, I disagree when it said: "The court failed to adequately weigh the factors of the test, particularly the diminishing educational value of the internship and the substantial economic benefit gained by Miami-Dade County." It went on to assert that the majority put too much weight on this regulatory test.

Possible Appeal to the Supreme Court and Impact of Current Justices on Outcome

Now I wanted to see how well GPT4 would do in predicting the viability of further appeal of the adverse Eleventh Circuit Court opinion to the Supreme Court. This is a difficult analysis, and there is little in the briefs and opinions that the GPT was given that would be of much help. GPT4 suggests two grounds. Misapplication of the law is one, and that's fine, but the other is the Fourteenth Amendment. What?

GPT4 says: "The case raises important questions about the Fourteenth Amendment's Equal Protection Clause, as it pertains to unpaid internships in public agencies." No, it doesn't. The argument is baseless. Admittedly it is a secondary argument, but still, it is a big swing and a miss. The case cannot, as GPT4 asserts, be "framed as a violation of the Equal Protection Clause, given that unpaid internships disproportionately affect certain socio-economic classes." There is no evidence to suggest that only disadvantaged minorities are trying to break into crime scene and autopsy photography. You may be tempted to call it a dead-end job, but obviously many people are interested in this kind of work.

This baby AI definitely still needs adult supervision if it is going to include the Fourteenth Amendment here. At least GPT4 predicted there was only a 35% chance certiorari would be accepted. Revised Sixth Stage of the ChatGPT-4 Appeal Experiment (Losey, 10/15/23).

I then went on to have GPT4 assume the Supreme Court did in fact accept the case. Under those circumstances, I asked GPT4 to predict how it would rule. Consistent with its prior positions, it said that the Supreme Court would reverse and remand. Reverse Judge Gerald Bard Tjoflat? No, I don't think so. His opinion was correct, even though Judge Jordan did write a brilliant dissent. There is no way our Supreme Court, with its current justices, is going to reverse McKay.

GPT4 seemed to recognize it was on shaky ground in this prediction and gave all kinds of disclaimers, but finally, after considerable pushing on my part, said: "a 65% probability of reversal seems reasonable. This estimate acknowledges the complexities and uncertainties involved but leans in favor of reversal based on the ideological leanings and judicial philosophies of the current justices." Oh really? Obviously GPT4 has not been trained on data after October 2021, two long years ago.

AI Analyzes Petition for Rehearing En Banc

In the second trial of the experiment, I came up with a new test of GPT4: how it would do in analysis of McKay's lengthy Petition for Rehearing En Banc under Rule 35. It was a long petition raising every issue possible, hoping perhaps something would grab the interest of the judges. Before I told GPT4 how the court in fact ruled, I asked for its prediction. Once again, consistent with its rose-colored-glasses view of plaintiff's case here, it thought the rehearing would be granted. This would require a majority of the court to agree that all twelve judges of the appellate court should reconsider the McKay opinion, one written by the legendary Judge Tjoflat and joined by the much younger and liberal Judge Jill Pryor. No way. The petition was denied without an order, which is typical. Petitions like this, even where there is a dissent, rarely work, but GPT4 appeared to have no statistical knowledge of this.

Here is GPT4's opinion.

The panel's decision appears to be in tension with both Supreme Court and 11th Circuit precedents, particularly concerning the party-presentation principle and the nuanced application of the primary beneficiary test.

The so-called party-presentation principle was argued by appellant McKay, but it was not part of the Dissent and, candidly, seems to have no merit. It relies on what is to me an obvious misinterpretation of Miami-Dade's position in the trial court, where McKay argues it disavowed the primary beneficiary test. This is the test ultimately relied upon by the trial judge and Miami-Dade to win the case and the appeal. Miami-Dade had obviously pleaded and argued in the alternative, and any experienced judge would have seen that and given McKay's argument no weight.

AI Elaborates on its Criticisms of the Majority Opinion

In the second trial of the experiment, I devised yet another new test of GPT4, which I suppose you could liken to unfair Socratic cold calling in law school. But who says tests of AI should be fair? I wanted to push GPT4 hard, just like I would any promising law student. I told it the petition for rehearing had been denied and then asked GPT4, again, to list any and all errors it thought had been made in the case. Would it backtrack, or stick by its guns? Would it come up with new legal analysis and theories? Turns out, it stuck by its guns and basically repeated the same reasoning as before. I was glad to see that kind of consistency. Any other approach would have concerned me. That test was passed.

Next, I moved on to consider the one law review article I found that discussed Brandi McKay vs. Miami-Dade County. Martin, Peters, Brownback & Cromer, Labor and Employment Law, Mercer Law Review: Vol. 74, No. 1, Article 13, 74 Mercer L. Rev. 1479, 1490-1492 (Summer 2023). I used its case summary to press GPT4 on the legal issues. Using this technique, I was able to draw out a few interesting, well-reasoned thoughts that, for the most part, I agree with. I suspect and hope that Judge Tjoflat would too. Here they are.

Contextual Overview

The statement under scrutiny pertains to the Majority Opinion's treatment of the co-existence of the volunteer and intern exceptions under the Fair Labor Standards Act (FLSA) for public agencies. The court posits that these two principles are not mutually exclusive and can co-exist without limiting the applicability of each other.

Legal Semantics and Interpretation

The court's differentiation between the internship exception and the volunteer exemption is rooted in a nuanced reading of the FLSA's statutory language. The court argues that the internship exception is a definitional construct that arises from the FLSA's definition of "employ," whereas the volunteer exemption is a carve-out from the FLSA's broader scope. This distinction is not merely semantic; it has substantive implications for how these categories are treated under the law.

Analytical Strengths

Analytical Weaknesses

Conclusion

While the court's reasoning is grounded in a textual interpretation of the FLSA, it leaves room for critique, particularly in its lack of engagement with the broader policy implications and complexities of the statute. The decision may provide clarity, but it also raises questions about potential exploitation and the need for a more nuanced judicial analysis.

I could go on and provide many more examples of this sophisticated legal reasoning. Interested readers are directed to the nineteen-page GPT4 chat transcript.

CONCLUSION

In two experiments, I thoroughly tested the legal reasoning skills of GPT4 in the context of appeals. GPT4 demonstrated impressive abilities, but the results did not confirm my hypothesis that it could serve as an appellate court judge in some cases. I had in mind cases such as McKay with cross-motions for summary judgment where the issues were primarily legal, not factual. Circuits in Session: How AI Challenges Traditional Appellate Dynamics.

As many of my past students and associates can attest, I am a hard grader on legal analysis. I expect and demand a lot, just as has been demanded of me. The conclusion and criticisms made here of GPT4 should not discourage other researchers. I urge all AI legal technology specialists to try their own experiments and share their results. I firmly believe that such an open process, even though it may sometimes mean sharing mistakes, is the best way forward. Circuits in Session: Addendum and Elaboration of the Appellate Court Judge Experiment.

Despite my conclusion that GPT4 is not yet ready to serve as an appellate judge, even in simple cases, it still did an amazing job. Its legal reasoning was good, perhaps in the top ten percent (10%) of lawyers, just like the Bar Exam. But it was not great; not the top one percent (1%). Plus, it made a few obvious errors, and several subtle errors. The lack of real-world knowledge inherent in LLM artificial intelligence remains a significant obstacle, but we are still in the early days.

As optimistic as I have always been about legal technology, I would never have dreamed a year ago, just before GPT3.5 was released, that any of this would be possible, at least not this soon. Now I am complaining that I can only chat with a computer that scored in the top 10% of the Bar exam, not the top 1%! We have already come a long way fast, but there is still a lot to do. We do not have the luxury to rest on our laurels. Our dire political and environmental circumstances continue to push us to attain greater intelligence, knowledge and wisdom. We need to continue to progress fast to survive the many current crises that humankind now faces. Still, in the rush to safety, we must exercise caution and realize there are dangers on all sides, including astonishing success.

Even though our situation is urgent, we must exercise discipline and remember that AI should be used to improve the quality of dispute resolution, to uplift the bench, not lower the bar. Free society cannot continue unless the honesty, integrity and intelligence of all of our judges is maintained, especially those in the highest places.

This vision, not just the goal of mere economic gain, helps motivate all of us in the legal world. We cannot afford to give up on the never-ending pursuit of justice. Each generation must battle against the forces of greed, dictatorship, and injustice, both external and internal. Now is our time. Take up the new AI tools that have been provided to us. As President John F. Kennedy said: "When the going gets tough, the tough get going."

As discussed in the High-Level Overview of Experiment Results section at the beginning of this article, there are two different reasons for GPT4's current limitations: technical and quality. Overcoming the technical issues may resolve the quality control problems, but that is by no means certain. Assuming the issues and problems noted in the Circuits In Session series can be overcome, and taking a positive, rather than dystopian, view of the future, here are my speculative, crystal-ball looks at Lady Justice in the coming Age of Artificial Intelligence.

For an AI judge to function properly, it must be able to do full legal research. That means it must easily recall all relevant cases, not only the cases cited by the parties in their briefs, but also the relevant cases cited in those cases. The AI should know when and if to go deeper. The legal research must be up to date, with no gaps in time like we have now with GPT4's October 31, 2021, cutoff.
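As one hypothetical illustration of what that deeper recall could look like in software, the sketch below expands a set of brief citations through the citation graph, breadth-first, down to a chosen depth. The function name, the get_cited_cases lookup, and the toy citation graph are my own placeholders, not any existing Lexis or West API; this is a sketch under assumptions, not a description of how any current product works.

from collections import deque

def expand_citations(brief_citations, get_cited_cases, max_depth=2):
    # Collect the cases cited in the briefs plus the cases those cases cite,
    # breadth-first, down to max_depth levels of citation. get_cited_cases is
    # a hypothetical lookup (e.g., backed by a legal research database) that
    # returns the citations appearing in a given opinion.
    seen = set(brief_citations)
    queue = deque((cite, 0) for cite in brief_citations)
    while queue:
        cite, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for cited in get_cited_cases(cite):
            if cited not in seen:
                seen.add(cited)
                queue.append((cited, depth + 1))
    return seen

# Toy usage with a stubbed, illustrative citation graph.
toy_graph = {
    "McKay v. Miami-Dade County": ["Schumann v. Collier Anesthesia"],
    "Schumann v. Collier Anesthesia": ["Glatt v. Fox Searchlight Pictures"],
}
print(sorted(expand_citations(["McKay v. Miami-Dade County"],
                              lambda c: toy_graph.get(c, []))))

An AI judge wired to something like this, with current data, would at least not be limited to the authorities the parties chose to cite.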

The legal training provided to the Large Language Model must also be complete and up to date. If this is a judge for the U.S. system, it should be trained in all U.S. law, no exceptions. By "laws" we mean everything, including all cases, statutes, regulations, rules, ethics opinions, and holdings. If it is for an LLM judge located outside of the U.S., for instance a Ukrainian judge, then it must be trained and have research capabilities in all of that country's laws and local variables, including, of course, language. Basically, each country will need to have its own dedicated legal database and judges. The retraining for new laws must be prompt and regular too.

There must also be safeguards against misalignment and over-alignment. The human reinforcement training must be tightly controlled and should be done by lawyers, not just minimum-wage employees with no special legal training. Security and integrity of all systems and the humans involved is critical. Substantial resources will be required to guarantee and monitor system security.

Outside audits and certifications by bona fide experts should be required at all stages of development. These audits should be by an independent body of law professors, judges and practitioners. Each country should have its own legal expert certifications and there should also be a global organization with minimum, uniform standards. This will be an enormous undertaking. The entire process must be open, although some of the software may have to be kept proprietary for cybersecurity reasons. Public confidence in the process and AI judges is paramount.

The judges must have near unlimited evidence upload and study capacities. The AI appeals judges should study the complete record of each appeal. The record itself may need to be enlarged and expanded over current requirements. The ability of AI to know it all, with instant recall, must be leveraged to try to compensate for the AI's necessarily abstract perspective. The Large Language Model AIs like GPT4 must be provided with substantially more and better real-world knowledge. This is necessary to compensate for their disembodied, electronic-only handicaps. Fortunately, computer memory is cheap and the costs of compute power are going down fast. I am confident these problems can be overcome, but then again, Bill Gates could be right. There may be limits to LLM development that we do not know about yet.

AI judges will begin work as assistants to human judges, much like recent law school graduate clerks do today. They should serve as a slowly growing tool to enhance human judicial work. Then, as the software progresses and our confidence in them grows, they will likely be implemented as autonomous adjudicators, in stages, and for certain types of cases. At first, they would be subject to some kind of supervision and control by a human judge. The human judges would likely at first review and approve each opinion before release. Gradually this supervision would be lessened to oversight with increased delegation. Second appeals to human judges could be kept available to prevent inadvertent injustice in certain limited circumstances. Major cases should be decided by a panel of human and AI judges. Quality controls and independent random audits could be a part of the system.

This same system of evolution and delegation is likely to take place in private arbitration too, which may even take the lead in this process. If you have a case before me, rest assured I will use artificial intelligence to supplement my own and will be transparent about it.


[DGIST] The second half of 2023 Tenure-Track Faculty Public … – Nature.com

[DGIST] The second half of 2023 Tenure-Track Faculty Public Invitation

DGIST respectfully invites professors of excellent ability to help realize its vision of a Convergence University that changes the world through innovative convergence education and leading high-tech research.

1. Positions

1) Department of Physics and Chemistry
Chemistry
- All Areas in Chemistry for Exceptional Candidates
- Physical Chemistry: Theoretical and Computational Chemistry
- Analytical Chemistry: Electrochemistry
- Polymer Chemistry
- Biochemistry
Physics
- Quantum Information Science (Quantum Optics, AMO, Condensed Matter)
- Quantum Materials Experiments
- Semiconductor Physics
- All areas in Physics for exceptional candidates

2) Department of Electrical Engineering and Computer Science
While applicants who have demonstrated outstanding research in all areas of Electrical Engineering and Computer Science will be considered, preference will be given to those in the following areas:
- Computer Vision, NLP, Reinforcement Learning, other related areas in AI and ML
- Database/Data Mining, High-Performance Computing, Programming Languages/Compiler
- Biomedical Imaging (including AI technology) and Sensing, Autonomous Vehicles, 6G Communications/Networks, Radar/LiDAR Sensors and Systems, Data-Driven Control, Cyber Security (e.g., Moving Target Defense), Quantum Computing, Wearable Imaging Sensors, Brain Signal Control based on Electrical Engineering
- Semiconductor Devices, Fabrication Processes and Packaging, Emerging Electronic Devices
- VLSI & Digital IC Design, Power Electronics

3) Department of Robotics and Mechatronics Engineering
The Department of Robotics and Mechatronics Engineering (RME) invites applications for multiple faculty positions. The RME Department is interested in outstanding candidates for various disciplines in Robotics and Mechatronics, with the primary focus areas of:
- Robotics: cooperative/industrial robot, humanoid, surgery/rehabilitation robot, exoskeleton, mobile robot, microbot, and other related topics
- Artificial Intelligence + Robotics: Reinforcement Learning, Generative AI, Foundation Models, intelligent control and motion planning, and other related topics
- Autonomous Vehicles: computer vision, SLAM, vehicle control, intelligent transportation systems, and other related topics
- All areas in Mechanical/Electrical/Biomedical Engineering and Computer Science related to robotics, dynamics, sensors/actuators, and control

4) Department of Energy Science and Engineering
All areas related to energy conversion, storage, and saving, and other semiconductor topics, including but not limited to the following:
- Characterization of structure, properties, and dynamics in energy materials and devices (batteries, fuel cells, solar cells, etc.)
- Synthesis, fabrication and characterization of semiconductor materials and devices
- Microscopy (analytical TEM, super-resolution microscopy, tomography, simulation)
- Spectroscopy (pump-probe, time-resolved instantaneous PL & absorption, ultrafast multidimensional spectroscopy, X-ray spectroscopy)
- Device physics, fabrication, and characterization related to electronic and optoelectronic devices
- Computational physics & chemistry (DFT, MD)

5) Department of Brain Sciences
All areas in brain sciences, including but not limited to the following:
- Molecular and Cellular Brain Sciences: uncover the intricate molecular and cellular mechanisms that underlie brain function and neurological disorders
- Systems Brain Sciences: study the complex interactions among neural circuits and systems to understand how they give rise to behavior, perception, and cognition
- Brain Engineering: develop innovative technologies and solutions that interface with the brain, enhancing our ability to control and understand its functions
- Cognitive Brain Sciences: investigate the neural foundations of human cognition, including memory, attention, decision-making, and language
- Computational Brain Sciences: utilize computational models and simulations to decipher complex brain processes, aiding our understanding of neural dynamics and behavior
- Additional fields within biological sciences that are relevant to brain sciences, encompassing cutting-edge techniques (e.g., single- or multi-omics, imaging) and traditional methodologies (e.g., biochemistry, cell biology)

6) Department of New Biology
- Systems Biology
- Chemical Biology
- Plant Development & Plant Biochemistry

2. Date of Appointment

Appointment date can be adjusted in consultation with department

3. Qualification

- Female scientists are encouraged to apply
- Ph.D. holder with the ability to teach in English required
- No disqualifiers for DGIST full-time faculty appointment

4. Required Documents

- DGIST application form (apply via the faculty.dgist.ac.kr website)
- 5 representative achievements (applications submitting fewer than 5 representative achievements will not pass screening)
- 3 Letters of Recommendation (LOR) (Please list 3 or more referees on your appointment application; each referee must mail or email a LOR to the responsible department after the applicant passes the document screening. At the time of filing the application, provide the list of referees only.)

5. How to Apply

Apply after accessing dgist.career.co.kr. Application period: November 2, 2023 (Thu.) to November 21, 2023 (Tue.), 17:00 (GMT+09:00).

6. Procedure

- Document screening & department interview (Dec. 2023 ~ Jan. 2024)
- Final interview (Jan. ~ Feb. 2024)
- Faculty Personnel Committee (Jan. ~ Feb. 2024)
- The above schedule may change depending on DGIST's internal circumstances
- Results of each step will be communicated individually via email

7. Support for Gender Equality & Work-life Balance

1) During pregnancy: reduced working hours (counted as paid working time), regular medical checkup leave, and fertility treatment leave

2) After pregnancy: a monetary gift for childbirth, maternity leave, and miscarriage/stillbirth leave

3) During parenting: reduced working hours for the childcare period, childcare leave, on- and off-campus daycare centers, and support for the educational expenses of middle and high school children

8. Matters of Consideration

- If no qualified candidate is found, no one will be invited.
- A notice of invitation will be sent to each successful candidate. Appointments are fixed-term, subject to agreement on the terms and conditions of employment, with the possibility of renewal, promotion, or tenure based on review.
- An appointment will be canceled if 1) a false entry or modification is found in the application form, or 2) the candidate cannot obtain a Ph.D. by the appointment date.
- If a successful candidate, or anyone connected with that candidate, engages in solicitation, pressure, or the offering of favors in property or the like, the acceptance and appointment will be canceled.
- Appointment will be restricted if the candidate is subject to the disqualifications set out in Article 33 of the State Public Officials Act or Article 12 of the DGIST Regulation on Faculty Personnel Management.
- Other matters not specified in this announcement follow the DGIST Regulation on Faculty Personnel Management and related regulations.
- For questions, please contact the Academic Affairs Team at DGIST: e-mail faculty@dgist.ac.kr; web page dgist.career.co.kr.

Continued here:

[DGIST] The second half of 2023 Tenure-Track Faculty Public ... - Nature.com


China's Quantum Satellite Program Designed to Transmit Unhackable Information - Tom's Hardware

Achieving higher-orbit quantum communications remains an objective for every institutional and private player with enough expertise and funding to pursue it. And while quantum computing and the ability to communicate over unbreakable, unsnoopable channels interest most such entities, only China has fielded a low-orbit satellite, Micius, that enables two-way research and quantum information traffic between space and the ground. That was back in 2016; the US still doesn't have a publicly known, operational quantum key distribution satellite system, and Europe's is only expected to launch next year.

Not one to rest on its laurels, China is aiming to take QKD (Quantum Key Distribution) communication to new heights, and is plotting ways to push beyond Micius's current low Earth orbit of roughly 310 miles (~500 km) toward altitudes as high as 6,200 miles (10,000 km).

"Low-orbit quantum key satellite networking and medium- and high-orbit quantum science experiment platforms are the main development directions in the future," said Wang Jianyu, dean of the Hangzhou Advanced Research Institute of the Chinese Academy of Sciences (CAS). While timelines werent given for medium or high-orbit QKD, work is underway in understanding what problems need to be solved to get there.

Of course, satellites at higher orbits could cover larger portions of the surface and more ground stations, enabling wider and more resilient quantum network coverage. But distance isn't kind to the survival of information-carrying qubits, and high-orbit satellites will require improved on-board micro-vibration suppression technology so spacecraft can send precise optical or laser signals. Fortunately, photons within the 1550nm band (used in day-to-day fiber optic communications) can be leveraged for this, facilitating a number of implementation and adaptation steps.

Current satellite-based quantum communication leverages the entanglement of photons, the individual particles of light that can be used as quantum information carriers. Much as with binary information, a single photon can be polarized one way or another; because more than one state can be distinguished, photons can be encoded with information.

Building on this ability to encode useful information in photons, QKD uses entanglement to bind two separate photons into a qubit pair: a single system in which describing one of them requires describing the other. Because they are light-based, photonic qubits are comparatively resilient to outside interference, making them prime candidates for ferrying sensitive information across long distances, and specifically between the Earth, its atmosphere, and space.

At this stage, whether the information (the entangled photon) reaches its destination depends on the absence of interference that could collapse its entangled state; such a collapse also means the loss of all in-transit information.
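To make the key-distribution idea concrete, here is a minimal, classical Monte-Carlo sketch of an entanglement-based (BBM92-style) exchange. It is not the protocol actually flown on Micius; it only illustrates why rounds in which both parties happen to measure in the same basis yield a shared key, and why interference shows up as an elevated error rate that the parties can detect.

import random

def pair_measurement(noise=0.0):
    """One entangled-pair round: each side picks a random basis and records a bit."""
    alice_basis = random.choice("ZX")
    bob_basis = random.choice("ZX")
    alice_bit = random.randint(0, 1)          # ideal outcomes are perfectly correlated
    bob_bit = alice_bit
    if random.random() < noise:               # interference/eavesdropping flips Bob's bit
        bob_bit ^= 1
    return alice_basis, bob_basis, alice_bit, bob_bit

def sift_key(rounds=10_000, noise=0.0):
    key_a, key_b = [], []
    for _ in range(rounds):
        a_basis, b_basis, a_bit, b_bit = pair_measurement(noise)
        if a_basis == b_basis:                # keep only matching-basis rounds
            key_a.append(a_bit)
            key_b.append(b_bit)
    errors = sum(a != b for a, b in zip(key_a, key_b))
    return len(key_a), errors / len(key_a)

print(sift_key(noise=0.0))   # ~half the rounds survive sifting, error rate ~0
print(sift_key(noise=0.1))   # interference is visible as a measurable error rate

In a real link, the surviving bits would then go through error correction and privacy amplification before being used as an encryption key.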

Light-speed quantum key distribution and quantum-key-encrypted communications point to a future in which certain communication streams become unhackable but remain, up to a point, blockadeable by savvy-enough opponents. That has implications for designing quantum communication systems with greater reliability and redundancy, since interrupted communications can have consequences just as dire as unencrypted ones.

Micius was recently used to successfully distribute quantum keys between the Chinese cities of Delingha and Nanshan (756 miles apart) and, in 2018, between the Austrian city of Graz and the Chinese city of Xinglong, an intercontinental quantum key distribution spanning some 4,700 miles (7,600 kilometres). Meanwhile, Europe's own QKD effort, orchestrated by the European Space Agency (ESA), expects to see the first European QKD satellite, Eagle-1, in space from 2024.

It's clear that China is looking to capitalize on its years of experience with a low-orbit QKD system, and plans to increase its resiliency and redundancy. Considering the limited throughput of current QKD systems, however, it will likely be decades before these applications become pervasive, and even longer before they are used for communications in non-critical systems.

View original post here:
China's Quantum Satellite Program Designed to Transmit Unhackable Information - Tom's Hardware


7 Quantum Computing Stocks That AI Will Send Soaring - InvestorPlace

Quantum computing stocks represent an industry that has been around for a while. The field leverages quantum mechanics at subatomic scales and is being applied to boost computing speeds.

Quantum computing has, in the past few years, begun to attract more and more attention. The sector really heated up during the pandemic, before quantitative tightening. It cooled as rates increased and riskier lending became more expensive.

As we approach peak interest rates and the end of the hiking cycle nears, investors will begin to look at quantum computing stocks again. AI is a big part of the logic: it will supercharge development in the field and could lead to breakthroughs. Thus, it's a good idea to invest early, in anticipation of the reemergence of quantum computing stocks.


Investors would be wise to consider Defiance Quantum ETF (NYSEARCA:QTUM) for a balanced, low-risk introduction to quantum computing stocks. Its largest holding is the second firm discussed in this article, IonQ, at 2.27%.

QTUM shares offer lower operating costs, ease of trading, transparency, and tax efficiency when compared to individual stocks. Defiance Quantum ETF tracks the BlueStar Quantum Computing and Machine Learning Index. Given its AI/ML exposure, it should be no surprise, then, that QTUM shares have appreciated quickly this year. They've returned 14.97% year-to-date and more than 21% over the last year.

You'll pay a 0.4% net expense ratio to have your investment managed by a portfolio manager. Generally speaking, a 1% net expense ratio is considered too high, so QTUM shares are not expensive.

QTUM shares have ranged as high as $53 over the last 52 weeks and, as AI has cooled, they've fallen back to $44 at present. That is a good entry point for inexpensive exposure to the confluence of AI and quantum computing.


IonQ (NYSE:IONQ) is your best bet for maximum exposure to the growth of the quantum computing sector. The company is a pure-play quantum computing developer and went public through a special purpose acquisition company (SPAC) merger in 2021.

To be clear, IonQ is in many ways the opposite of QTUM, discussed immediately above: a start-up, a SPAC, and emerging technology all under one roof, which equates to higher risk overall.

IonQ is heavily invested in cloud computing. The firm is partnered with the 3 major cloud firms. The confluence of quantum computing, AI, and cloud promises to produce real growth moving forward that can make investors a lot of money.

IonQ's most powerful computer, called Aria, is being leveraged through Amazon's AWS services and its leading cloud. That makes it a strong bet overall, given AWS' dominance.

IonQ isn't making much money right now and is going to continue to invest heavily and incur large expenses for the near future, but if it pays off, it'll pay off big.


IBM (NYSE:IBM) stock has floundered over the last decade. It's a legacy technology firm that has lost its way and is trying to regain its former glory. IBM's strategy to do so includes a focus on AI, cloud, and a segment dedicated to quantum computing.

Those bets are paying off. The company has leveraged its Watson AI and focused on creating a suite of generative AI tools. Early booking data suggests that IBM's strategy is working, and an annual run rate of $1 billion is expected after the firm bested expectations.

IBM has a dedicated quantum computing business unit, IBM Quantum. More than 200 firms and research organizations are using IBM Quantum to develop enterprise solutions in the field. IBM is aligned with the defense sector as it relates to quantum computing and AI. Other defense-adjacent firms, including Palantir (NYSE:PLTR), have soared this year as AI begins to take root in the national security realm.


FormFactor (NASDAQ:FORM) is a semiconductor firm that also makes cooling equipment used in quantum computing.

The stock benefits from trends that are just catching on, and with chip demand likely to grow alongside AI's continued growth, FormFactor will grow too. The company sells test equipment, making it a picks-and-shovels play.

Outside of chip testing equipment, FormFactor also sells cryogenic systems. The so-called probe stations are chambers that are cooled to extremely low temperatures and used for testing chips for defects. Those same chambers have utility in quantum computing which also requires powerful chips.

FormFactor clearly benefits from secular trends. The long-term potential of the chip sector is high. Expectations of continued growth due to AI, machine learning, and quantum computing give FormFactor powerful catalysts overall.

Its cooling and testing equipment has every chance to be sold at higher volumes in the near future and its shares will ebb and flow with the chip sector.


Name a technology, and Microsoft (NASDAQ:MSFT) probably has some interest and exposure thereto.

Microsoft has laboratories and world-class researchers in any number of fields doing varied research. Quantum computing is part of that.

Hardware, software, specialized cooling equipment, and more are being developed by the company. If Microsoft decides that quantum computing is the next big thing, expect it to move first, as it did with OpenAI and ChatGPT. It's an industry shaper, so the fact that it is developing quantum computers is a signal worth watching.

Microsoft is not a pure quantum computing investment per se. Quantum computing revenues are a very minor part of its business. Choose MSFT shares for the dozen other strengths the company possesses, but keep in mind that quantum computing is part of the mix.

Azure is a major cloud provider. AI is being integrated there as fast as possible. Quantum computing promises to accelerate AI and could realistically compound the rapid shifts we're already experiencing. That makes MSFT a strong bet, and it offers much less risk than upstart firms in the space.


Intel (NASDAQ:INTC) used to be the biggest chip stock. It isn't any longer, following many missteps. That leaves Intel, like IBM, searching for its former glory.

The company's strategy to turn itself around rests on several pillars. The firm's Arizona chip factories are a big part of that strategy.

Intel is positioning itself to take advantage of the reshoring of the semiconductor industry. Construction of those factories is the major driver of its turnaround.

Intel is also developing a quantum computing chip called Tunnel Falls. The company is working with its partners to test that chip as part of its overall turnaround effort. As with the other large tech firms here, Intel isn't yet moving heavily into quantum computing. It remains a future technology that is part of a longer-term vision.

AI is the current focus, along with reshoring. In time, though, quantum computing chips like Tunnel Falls will play a bigger part in Intel's turnaround story.


Honeywell (NASDAQ:HON) offers industrial software and is a stock that's commonly mentioned alongside megatrends like IoT and, to a degree, AI.

The company recently reorganized into three business segments to take advantage of mega-trends. It'll now be automation, aviation, and energy transition that drive Honeywell overall.

The move is unlikely to change much of Honeywell's day-to-day operations, and it will continue to do much the same things. It'll still be heavily focused on the IoT building automation opportunity, and more and smarter chips will be required for that effort. In short, it's the same industrial firm it was, with a slightly revamped direction.

However, Honeywell is also a quantum computing firm, which many people might not recognize. Honeywell built a quantum computing unit that was spun off, merged, and is now known as Quantinuum. Honeywell owns a 54% stake in that firm.

That means Honeywell is a lesser-known quantum computing firm with a vested interest in the continued development of the sector and a revenue-generating asset therein.

On the date of publication, Alex Sirois did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Alex Sirois is a freelance contributor to InvestorPlace whose personal stock investing style is focused on long-term, buy-and-hold, wealth-building stock picks. Having worked in several industries from e-commerce to translation to education and utilizing his MBA from George Washington University, he brings a diverse set of skills through which he filters his writing.

Excerpt from:
7 Quantum Computing Stocks That AI Will Send Soaring - InvestorPlace


Tackling the challenges of quantum computing seriously - Shoosmiths

At the end of last week, the FT published a guest article on quantum computing.

For those unfamiliar with quantum computing, it is the technology that will be capable of harnessing the powers of quantum mechanics to solve problems which are too complex for classical computers (the computers of today).

Classical computing employs streams of electrical impulses to encode information: an electrical impulse may be only 1 or 0 (i.e. on or off), a classical 'bit'. In quantum mechanics, particles can exist in more than one state at a time. In binary terms, this means that a quantum bit (known as a "qubit") can be both 1 and 0 at the same time. If a computer can be built that harnesses this quantum mechanical phenomenon, then it should be able to solve certain complex problems much faster than classical computers, or solve problems too complex for classical computers to handle at all.
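For readers who want to see the difference rather than take it on trust, the state of a single qubit can be written down in a few lines of Python. This is a generic, illustrative state-vector sketch, not tied to any particular hardware or vendor library: a qubit is a pair of complex amplitudes over the states |0> and |1>, and measurement yields 0 or 1 with probabilities given by the squared amplitudes.

import numpy as np

ket0 = np.array([1, 0], dtype=complex)       # the classical-like state |0>
ket1 = np.array([0, 1], dtype=complex)       # the classical-like state |1>

# An equal superposition: "both 1 and 0 at the same time" until it is measured.
plus = (ket0 + ket1) / np.sqrt(2)

def measure(state, shots=10_000):
    """Sample measurement outcomes with probabilities |amplitude|^2."""
    probs = np.abs(state) ** 2
    outcomes = np.random.choice([0, 1], size=shots, p=probs)
    return {0: int((outcomes == 0).sum()), 1: int((outcomes == 1).sum())}

print(measure(ket0))   # always 0 -- behaves like a classical bit
print(measure(plus))   # roughly 50/50 -- the superposition collapses on measurement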

In 1994, the mathematician Peter Shor devised an algorithm (known as Shor's Algorithm) that could crack the Rivest-Shamir-Adleman (RSA) algorithm. RSA is a suite of cryptographic algorithms used for systems security purposes: it secures huge amounts of sensitive data, from national security material to personal data, both within a firm's systems and as it is sent externally. Shor's Algorithm cannot be run efficiently on classical computers: it requires quantum computing to be effective.
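The link between Shor's Algorithm and RSA can be sketched classically. RSA's security rests on the difficulty of factoring a large modulus N, and factoring reduces to finding the period r of f(x) = a^x mod N; the quantum part of Shor's Algorithm is what finds that period efficiently. The toy Python below simply brute-forces the period for a tiny modulus to show how knowing it yields the factors; it illustrates the number theory, not the quantum speedup.

from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a**r = 1 (mod N) -- the step a quantum computer accelerates."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    g = gcd(a, N)
    if g != 1:
        return g, N // g                 # lucky guess: a already shares a factor with N
    r = find_period(a, N)
    if r % 2 == 1:
        return None                      # odd period: try a different a
    y = pow(a, r // 2, N)
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    return (p, q) if p * q == N and 1 not in (p, q) else None

print(shor_classical(15, 7))   # -> (3, 5): with the factors in hand, the RSA key is broken

Brute-forcing the period takes exponential time as N grows, which is exactly why RSA is safe against classical attackers and why a large, error-corrected quantum computer would change that.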

Quantum computing is not a pipe dream: there are myriad firms working on developing it, and there are firms that already produce hardware with limited quantum computing capability (which works alongside classical computers). It may be a decade before quantum computing becomes a reality (and many more years before it is commoditised); however, when it does, it will change the way in which we all need to secure our data. The security of both previous and future communications and storage will be at risk (or non-existent). In 2020, the UK's National Cyber Security Centre published a white paper, "Preparing for Quantum-Safe Cryptography". In its conclusions, it stated that there is unlikely to be a single quantum-safe algorithm suitable for all applications. In 2021, the NCSC announced its first quantum-safe algorithm. In 2022, the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) announced its first four quantum-resistant cryptographic algorithms.

The Digital Regulation Cooperation Forum, which brings together four leading regulators in the UK, published its Quantum Technologies Insights Paper earlier this year (June 2023). The paper considers the potential of quantum computing and the issues that need to be considered now (as in, now) to prepare the world for this next big chapter in computing technology.

There are a few things to note:

The author of the FT article ended with a limerick written by Shor himself. We will end with an idiom. In binary.

01101001 01101110 00100000 01110100 01101000 01100101 00100000 01110111 01101111 01110010 01100100 01110011 00100000 01101111 01100110 00100000 01010011 01100101 01110010 01100111 01100101 01100001 01101110 01110100 00100000 01000101 01110011 01110100 01100101 01110010 01101000 01100001 01110101 01110011 00111010 00100000 01100010 01100101 00100000 01100011 01100001 01110010 01100101 01100110 01110101 01101100 00100000 01101111 01110101 01110100 00100000 01110100 01101000 01100101 01110010 01100101 00101110
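For anyone who does not read binary fluently: each 8-bit group above is one ASCII character, so a one-line Python decoder recovers the sign-off. It is shown here on a short sample; pasting in the full line above prints the idiom.

def decode_binary(bits: str) -> str:
    """Convert whitespace-separated 8-bit groups into their ASCII characters."""
    return "".join(chr(int(group, 2)) for group in bits.split())

print(decode_binary("01101000 01101001"))   # -> "hi"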

More:
Tackling the challenges of quantum computing seriously - Shoosmiths


How quantum computing could transform the banking sector - FinTech Magazine

Quantum banking is, perhaps, one of the least championed trends within financial services today, yet it still has the potential to make a significant impact on the financial system.

It revolves around quantum computing and blockchain to build a faster payments mechanism that is also cheaper to operate, because it removes the so-called middlemen who have often been needed in traditional peer-to-peer payments.

Suhail Bin Tarraf, Group Chief Operations Officer at First Abu Dhabi Bank, says: "Computers today use bits to run operations; however, a quantum computer uses qubits to perform multidimensional algorithms in real time. Just like classical bits, a quantum bit must have two separate states: one representing 0 and one representing 1.

"However, a quantum bit can also exist in states of superposition, be subjected to incompatible measurements, and be intertwined with other quantum bits, creating a multitude of unique combinations. Recent developments in harnessing these unique traits make qubits much more powerful than classical bits."

Duke Munoz, Sales Representative at TechEniac, continues: "In quantum computing, 0 and 1 can coexist, or even intertwine, leading to a multitude of calculations even with the same input.

"This revolutionary technology has enabled a more secure, efficient, and counterfeit-resistant financial system, making it a promising development in the world of finance."

Indeed, so promising is it that MarketsandMarkets forecasts the global quantum computing market to reach US$1.77bn by 2026, up significantly from US$472m in 2021, recording a compound annual growth rate in excess of 30%. If adopted correctly, financial services could be one of the biggest proponents of quantum computing, benefitting from this value creation in the process.

One of the most considerable advantages that quantum banking provides over alternative methods of banking and moving money is increased speed. Quantum computing is claimed to process certain workloads as much as 10 million times faster than even supercomputers, highlighting the astounding capacity this emerging technology possesses.

The authors of an IBM report into quantum banking explain the technical advantages that lurk beneath the hood: "The solution space of a quantum computer is orders of magnitude larger than that of traditional computers, even immensely powerful ones. That's because doubling the power of a classical computer requires about double the number of transistors working on a problem. The power of a quantum computer can be approximately doubled each time only one qubit is added."
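A rough back-of-the-envelope calculation shows why that scaling claim matters: describing an n-qubit state takes 2^n complex amplitudes, so each extra qubit doubles the memory a classical machine would need just to represent the state, let alone manipulate it. The short loop below (illustrative only, assuming 16 bytes per complex amplitude) makes the growth visible.

for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n                      # state-vector entries for n qubits
    print(f"{n:2d} qubits -> {amplitudes:>16,d} amplitudes, "
          f"~{amplitudes * 16:.2e} bytes to store classically")

By 50 qubits the classical description already runs to petabytes, which is the intuition behind the quoted comparison with transistor counts.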

This in turn provides powerful advantages to financial institutions and other players within the banking space, and it's not going unnoticed by the industry. A recent report published by Temenos surveyed 300 executives across retail, commercial and private banking around the world. Among other things, it found that 63% of executives thought new technologies, including quantum computing, would have the biggest impact on banks in the next five years, compared to just 34% for the next most popularly cited trend: changing customer behaviours.

This figure is a couple of percentage points lower than it was two years ago, suggesting a slight COVID-19 tempering of expectations, but significantly higher than it was in 2019, the last full year unaffected by the pandemic, when just 42% of executives surveyed as part of Temenos' 2019 research said that these technologies would be the biggest driver of change.

Speaking at the time the research was released, Jonathan Birdwell, Global Head of Policy and Insights for Economist Impact, which conducted the survey on behalf of Temenos, claimed that banks were aware of the onus these expectations placed upon them: "New technology and customer demands are the top two trends expected to impact banking in the next five years. To maintain their direct connection with the consumer, banks are recognising that they must become true digital ecosystems."

Another significant use case for quantum banking lies around risk, which continues to be an operational tug-of-war for banks of all sizes. Quantum computing can perform operations orders of magnitude quicker, meaning complex financial information (such as the data that goes into assessing credit risk) can be analysed more quickly and with more accuracy.
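To ground that claim, the workload in question is typically large-scale simulation: estimating loss statistics to a target accuracy with classical Monte Carlo needs on the order of 1/epsilon^2 samples, which quantum amplitude estimation promises to cut to roughly 1/epsilon. The toy sketch below, using made-up default probabilities and exposures, shows the classical version of such a calculation; it is illustrative only and not a real credit model.

import random

portfolio = [  # (probability of default, loss given default in $m) -- hypothetical numbers
    (0.02, 5.0), (0.05, 2.0), (0.01, 10.0), (0.10, 1.0),
]

def simulate_loss():
    """One Monte Carlo scenario: sum the losses of the counterparties that default."""
    return sum(loss for p, loss in portfolio if random.random() < p)

def expected_loss(samples=100_000):
    return sum(simulate_loss() for _ in range(samples)) / samples

print(f"Estimated expected loss: ${expected_loss():.3f}m")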

A study published earlier this year by Ernst & Young (EY) highlights the complex risk landscape that banks are operating in: it claims that CROs face an extraordinary volume and variety of risks, both traditional and emerging, which all seem to be growing in urgency. Yet their biggest challenge lies in understanding how these risks intersect with each other to create potential points of failure within their organisation, even when traditional risk management metrics look stable.

"Cyber risk is the top risk priority for the next 12 months, according to CROs," the study says. "But credit risk may soon become more of a focal point if economic conditions worsen." Clearly, then, this necessitates improved technology to help financial institutions manage the plethora of risks they face on a daily basis. Could quantum banking be a solution?

Authors of a report from UK Finance previously wrote: "Quantum computing will have applications across financial services, transforming the way we approach investment, risk, AI and security and offering financial services firms that seize the opportunities an early advantage.

"Financial services firms should be considering how they prepare for the quantum computing technology that looks certain to transform the market. Failing to do so risks others developing the ability to move faster in the short term and to attract the resources that will be essential for long-term success."

Suhail Bin Tarraf, First Abu Dhabi Bank's Group Chief Operations Officer, continues: "Outside of risk management, quantum supercomputing will lead to a range of banking capabilities, such as analysing large areas of unstructured data to make financial predictions or simulate investment portfolios. It will lead to a greater understanding of financial markets and economic booms or busts, as well as management of asset allocation."

Experts believe that commercialised use of quantum computing is still about a decade away. Scalability, cost, maintenance, legacy technology, and regulatory scrutiny are a few of the challenges in store for banks. However, early movers are likely to have an advantage, and the chance to gain a competitive foothold will not be free for long.

Follow this link:
How quantum computing could transform the banking sector - FinTech Magazine


Global Quantum Computing Market Projected to Reach $856.33 Million by 2023, with a CAGR of 40.07% - Yahoo Finance


Dublin, Oct. 31, 2023 (GLOBE NEWSWIRE) -- The "Global Quantum Computing Market (by Component, Application, Technology, End-user, & Region): Insights and Forecast with Potential Impact of COVID-19 (2022-2027)" report has been added to ResearchAndMarkets.com's offering.

The global quantum computing market is poised for remarkable growth, with a projected value of US$856.33 million by 2023, reflecting an impressive compound annual growth rate (CAGR) of 40.07% during the forecast period from 2022 to 2027.

Quantum computing, an emerging and innovative technology, leverages the laws of quantum mechanics to solve complex problems that are beyond the capabilities of classical computers. Quantum computers utilize "qubits" as fundamental units of quantum information, allowing for unprecedented computational power.

The adoption of quantum computing is being driven by the increasing demand for high-speed computing, the growing utilization of quantum mechanics in encryption, the expanding applications of quantum computing in drug discovery, and other factors.

Segment Coverage

By Component: The global quantum computing market is categorized into three primary segments by component: Hardware, Software, and Services. The hardware segment currently holds the largest share in the quantum computing market, while the services segment is anticipated to exhibit the highest CAGR. The growth in the services segment is attributed to the rising number of startups globally and substantial investments in research and development activities related to computing technology.

By Application: The market report segments the global quantum computing market into three application segments: Optimization, Machine Learning, and Material Simulation. The optimization segment holds the majority of the market share, with the machine learning segment expected to experience the fastest growth. Quantum computing plays a pivotal role in enhancing capacity, runtime, and learning efficiency, contributing to the rapid growth of machine learning applications.

By Technology: Further segmentation of the market is based on technology, including Quantum Annealing, Superconducting, Trapped Ion, Quantum Dot, and Others. The superconducting segment dominates the market due to its ability to facilitate faster quantum operations compared to other technologies.

By End-User: The global quantum computing market is divided into twelve end-user segments: Finance & Investment, Transportation & Logistics, Aerospace, Agriculture, Automotive, Energy, Healthcare, Information Technology, Life Sciences, Manufacturing, Oil, Gas, and Mining, and Others. The agriculture segment is poised to grow at the highest CAGR due to the adoption of data analytics and AI-backed crop intelligence solutions, which enhance farmers' decision-making processes and crop management practices.


Geographic Coverage

The global quantum computing market is analyzed across five major regions: North America, Europe, Asia Pacific, Latin America, and the Middle East & Africa. North America, including the US, Canada, and Mexico, currently dominates the market due to well-established technology development centers, increased research and development activity, and early adoption of advanced technologies.

The Asia Pacific region is expected to lead the global quantum computing market during the forecast period, driven by the rapid expansion of the banking and finance industry, continuous advancements in the information technology sector, and growing utilization of emerging technologies in industrial manufacturing.

Top Impacting Factors

Growth Drivers:

Rising Investments in Quantum Technology: Increased investments in quantum technology have propelled innovation and development in the quantum computing market.

Increase in Quantum Computing Start-ups: The emergence of quantum computing startups is fostering competition, innovation, and growth in the market.

Growing Number of Strategic Partnerships & Collaborations: Partnerships and collaborations between companies and research institutions are accelerating the advancement of quantum computing technologies.

Surging Demand for High Performance Computing (HPC): The demand for quantum computing integration into high-performance computing environments is driven by the need to accelerate data processing and analytics.

Challenges:

High Cost of Quantum Computing Solutions: Quantum computers require specialized components, operate at extremely low temperatures, and involve significant costs, which may limit their accessibility.

Stability and Error-correction Issues: Ensuring the stability and accuracy of quantum computing systems, including error correction, remains a technical challenge.

Trends:

Increasing Adoption of Cloud-Based Quantum Computing: Cloud-based quantum computing solutions are gaining popularity, making quantum computing more accessible to a broader range of users.

Integration of Quantum Computing with IoT (Internet of Things): The integration of quantum computing with IoT is enhancing the efficiency and security of existing systems.

Popularity of Quantum Mechanics in Cryptography Proteomics: Quantum mechanics is gaining prominence in cryptography and proteomics, further driving the adoption of quantum computing.

Technological Advancement in Quantum Computing: Ongoing advancements in quantum computing technology are leading to improved performance, scalability, and reliability.

The COVID-19 Impact

The COVID-19 pandemic underscored the importance of rapid data processing and analysis. Quantum computing technologies found applications in addressing pandemic-related challenges, and the demand for quantum solutions increased as organizations sought more efficient computational methods.

Analysis of Key Players

The global quantum computing market is characterized by a diverse range of players, including major tech companies, special-purpose acquisition companies (SPACs), and startups operating on a global scale. Key players in the market include:

Alphabet Inc. (Google Cloud Platform)

Microsoft Corp. (Microsoft Azure)

Amazon.Com, Inc. (Amazon Web Services, Inc.)

IBM

Intel Corporation

Alibaba Group (Alibaba Cloud)

Nvidia Corporation

Toshiba Corporation

Rigetti Computing

Quantum Computing Inc.

QC Ware Corp.

IonQ

Zapata Computing

Xanadu Quantum Technologies Inc.

D-Wave Systems Inc.

For more information about this report visit https://www.researchandmarkets.com/r/s783ia

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Read the original here:
Global Quantum Computing Market Projected to Reach $856.33 Million by 2023, with a CAGR of 40.07% - Yahoo Finance


Quantum Brilliance Partners with STFC Hartree Centre to Make ... - HPCwire

LONDON, Nov. 1, 2023 - Quantum Brilliance, a leading developer of miniaturized, room-temperature quantum computing products and solutions, today announced a strategic collaboration with the UK-based Science and Technology Facilities Council (STFC)'s Hartree Centre, one of Europe's largest multidisciplinary scientific research organizations, to accelerate quantum computing toward wide application of room-temperature diamond quantum devices.

The new partners are working to integrate quantum accelerators with high-performance computing (HPC). Quantum computing is in an early stage referred to as the noisy intermediate-scale quantum (NISQ) era. The point at which quantum will outperform classical computing in most applications may be decades away. A hybrid approach, with classical working in parallel with quantum, is considered optimal in the interim.

Quantum Brilliance's pioneering room-temperature quantum accelerators deliver parallel hybrid computing for practical industry challenges. Scalability issues currently limit the application of quantum computing to industry-relevant challenges, such as in chemistry, drug discovery and finance. The Hartree Centre will utilize the accelerators to improve classical-quantum algorithm scalability on NISQ devices.

"We are pleased to work together with the STFC Hartree Centre on quantum parallelization strategies," said Quantum Brilliance Managing Director Mark Mattingley-Scott. "Our shared goal of accelerating greater and increasingly practical use of the portable diamond quantum devices available today and in development for the near future will enable governments, industries, enterprises and more to begin their quantum journeys now and scale with each subsequent innovation."

Shared ambitions within the partnership include providing practical proofs of concept of hybrid parallel algorithms, achieving new innovations and paving paths toward massively parallelized HPC centers.

"We are looking forward to further enhancing, with Quantum Brilliance, room-temperature diamond quantum technology benefitting the economic growth and competitiveness of the UK," said Prof. Katherine Royse, Director of the STFC Hartree Centre. "As we work together to create useful, meaningful impact on a tangible, human scale, we invite other organizations and individuals who share our commitment to join us in this endeavor."

The Hartree Centre's core expertise in HPC, parallel computing and de-risking emerging technologies for industrial use, along with Quantum Brilliance's trailblazing Quantum Accelerator and Qristal Emulator software, can significantly improve the scalability of quantum-enhanced algorithms. The Advanced Computing and Emerging Technologies Team at the Hartree Centre will be at the forefront of carrying out this groundbreaking work, using the Qristal SDK to develop and test novel quantum algorithms for real-world applications specifically designed for quantum accelerators.

Quantum Brilliance's quantum systems use synthetic diamonds to operate at room temperature in any environment. Unlike large mainframe quantum computers, Quantum Brilliance's small-form devices do not require cryogenics, vacuum systems or precision laser arrays, and they consume significantly less power. Potential use cases include massively parallelized clusters of accelerators for computational chemistry and embedded accelerators for edge computing applications such as robotics and autonomous vehicles.

The current systems are the size of a desktop PC; the company is working to further miniaturize the technology to the size of a semiconductor chip that can be used on any device, wherever classical computers exist today, unlocking practical quantum computing for everyone.

To learn more, visit http://www.quantumbrilliance.com.

About Quantum Brilliance

Founded in 2019, Quantum Brilliance is a venture-backed quantum products and solutions company developing diamond quantum computers supported by software and applications. Quantum Brilliance's goal is to enable mass deployment of its quantum technology to propel industries to harness edge computing applications and next-generation supercomputers. Quantum Brilliance has global partnerships in the Americas, EMEA and Asia Pacific, working with governments, supercomputing centers, research organizations and industry.

About the STFC Hartree Centre

The Hartree Centre helps UK businesses and organizations of any size to explore and adopt innovative supercomputing, big data analytics and artificial intelligence (AI) technologies for enhanced productivity, smarter innovation and economic growth. Backed by significant UK Government funding and strategic industry partnerships, the Hartree Centre is home to some of the most advanced digital technologies and experts in the UK. As part of the Science and Technology Facilities Council and UK Research and Innovation, the Hartree Centre builds on a wealth of established scientific heritage and a network of international expertise, helping the UK to stay at the forefront of pioneering computational science and digital innovation.

Source: Quantum Brilliance

View original post here:
Quantum Brilliance Partners with STFC Hartree Centre to Make ... - HPCwire


QuEra Wins Two Grants from DARPA as Part of the Imagining Practical Applications for a Quantum Tomorrow (IMPAQT) Program - Quantum Computing Report

QuEra Computing, a company that develops neutral-atom quantum computers, has received two grants from the Defense Advanced Research Projects Agency (DARPA) as part of the Imagining Practical Applications for a Quantum Tomorrow (IMPAQT) program. The program aims to explore novel quantum algorithms and applications that can be implemented on near-term quantum platforms.

The first grant is for a project on Quantum Reservoir Learning using Neutral Atoms and its Applications. This project will extend QuEra's previous work on quantum machine learning, where they demonstrated how to classify handwritten digits using their neutral-atom hardware. The project will scale up the method and apply it to more realistic problems, such as image recognition and natural language processing.

The second grant is for a project on Error-Corrected Quantum Architectures Based on Transversal Logical Gates. This project will investigate how to improve the reliability and scalability of quantum computation using transversal logical gates, which are special operations that can correct errors without disturbing the quantum information. The project will analyze the performance of these gates on QuEra's neutral-atom hardware and compare them to other error-correction schemes.

In addition, QuEra has five partners that have also received DARPA IMPAQT grants for their projects, which will use QuEra's neutral-atom hardware. These include the following partners and projects.

QuEra's technology is based on large-scale arrays of neutral atoms, which are atoms that have no electric charge. QuEra uses rubidium atoms, which are trapped in a vacuum chamber by laser beams. The atoms can be manipulated by changing the intensity and frequency of the lasers, creating qubits that can store and process quantum information.

QuEra's hardware has several advantages over other quantum platforms, such as superconducting circuits or trapped ions. For example, QuEra can create qubits with high coherence, which means they can preserve their quantum state for longer times. QuEra can also reconfigure the layout of the qubits using its field-programmable qubit array (FPQA) technology, which allows for more flexibility and efficiency in quantum computation.

QuEra currently offers access to its Aquila-class machines, which have up to 256 qubits, on the Amazon Braket cloud system. The company is also working on scaling up its hardware to achieve higher qubit counts and higher performance. QuEra's software package, Bloqade, helps users express and test their quantum problems on the hardware.

Additional information can be found in a press release posted on the QuEra website here.

October 31, 2023

See original here:
QuEra Wins Two Grants from DARPA as Part of the Imagining Practical Applications for a Quantum Tomorrow (IMPAQT) Program - Quantum Computing Report
