
Engineering Cancer’s End: Moffitt Scientists Say Bioengineering Will Change Our Ability To Research And Treat Cancer – Eurasia Review

Bioengineering is revolutionizing cancer research, and Moffitt Cancer Center is at the forefront of this transformative movement. Moffitt is the first National Cancer Institute-designated comprehensive cancer center with a dedicated bioengineering department. This area of science integrates engineering and physical sciences with oncology to change how we understand and treat this complex disease. In a new commentary published in Cancer Cell, W. Gregory Sawyer, Ph.D., and Elsa R. Flores, Ph.D., share their visionary framework to accelerate cancer discovery and therapy breakthroughs through bioengineering.

"Cancer's complexity has been a formidable obstacle for researchers," said Sawyer, chair of Moffitt's Department of Bioengineering. "Traditional methods often struggle to capture the intricate interplay between cancer cells, the immune system and the surrounding environment. Cancer engineering offers a unique perspective by integrating these diverse fields, creating a powerful platform to develop next-generation solutions."

Cancer engineering blends 12 key fields, including system dynamics, nanomaterials, robotics, and biofabrication, to tackle cancer from all angles. This powerful platform could lead to advancements in early detection with microfluidic devices and advanced imaging techniques. Additionally, nanomaterials engineered on a microscopic level could revolutionize drug delivery by transporting medications directly to cancer cells with minimal impact on healthy tissues.

The potential doesn't stop there. 3D bioprinting technology offers the potential to create customized tumor models, allowing researchers to test drug efficacy and personalize treatment plans for individual patients. Sophisticated mathematical modeling, informed by engineering principles, could provide a deeper understanding of cancer's intricate biological processes, paving the way for developing more effective therapies.

"The possibilities unlocked by cancer engineering are truly exciting," said Flores, associate center director of Basic Science at Moffitt. "We envision more universities and cancer centers following Moffitt's lead and creating dedicated cancer engineering programs to foster collaboration and accelerate progress in the fight against cancer."

Here is the original post:

Engineering Cancer's End: Moffitt Scientists Say Bioengineering Will Change Our Ability To Research And Treat Cancer - Eurasia Review


New method extracts lithium from seawater, to boost battery production – Interesting Engineering

Researchers have optimized a new method for extracting lithium from widespread sources such as seawater, groundwater, and flowback water (a byproduct of fracking and offshore drilling).

Developed by researchers at the University of Chicago Pritzker School of Molecular Engineering (PME), the method shows how certain particles of iron phosphate can most efficiently pull lithium out of dilute liquids.

The new method is expected to hasten an era of faster, greener lithium extraction.

"Right now there is a gap between the demand for lithium and the production," said Chong Liu, Neubauer Family Assistant Professor of Molecular Engineering. "Our method allows the efficient extraction of the mineral from very dilute liquids, which can greatly broaden the potential sources of lithium."

The method isolates lithium based on its electrochemical properties, using crystal lattices of olivine iron phosphate.

Because of its size, charge and reactivity, lithium is drawn into the spaces in the olivine iron phosphate columns like water soaking into the holes of a sponge. But if the column is designed perfectly, sodium ions, which are also present in briny liquids, are left out or enter the iron phosphate at a much lower level, according to the study.

Researchers tested how variation in olivine iron phosphate particles impacted their ability to selectively isolate lithium over sodium.

"When you produce iron phosphate, you can get particles that are of drastically different sizes and shapes," said PhD student and first author Gangbin Yan.

"In order to figure out the best synthesis method, we need to know which of those particles are most efficient at selecting lithium over sodium."

The study details how researchers synthesized olivine iron phosphate particles using diverse methods, resulting in particle sizes ranging from 20 to 6,000 nanometers. These particles were then grouped by size and used to construct electrodes for extracting lithium from a weak solution, as reported by Phys.org.

Researchers observed that overly large or small iron phosphate particles tended to allow more sodium into their structures, leading to less pure lithium extractions.

"It turned out that there was this sweet spot in the middle where both the kinetics and the thermodynamics favor lithium over sodium," said Liu.

"We have to keep this desired particle size in mind as we pick synthesis methods to scale up. But if we can do this, we think we can develop a method that reduces the environmental impact of lithium production and secures the lithium supply in this country."

Amid rising demand for electric vehicles, the demand for lithium, the mineral required for lithium-ion batteries, has also soared. However, current methods of extracting lithium from rock ores or brines are slow and come with high energy demands and environmental costs. In contrast, the new method is environmentally friendly and faster than other current methods.

The study was published in the journal Nature on June 7.


Prabhat Ranjan Mishra Prabhat, an alumnus of the Indian Institute of Mass Communication, is a tech and defense journalist. While he enjoys writing on modern weapons and emerging tech, he has also reported on global politics and business. He has been previously associated with well-known media houses, including the International Business Times (Singapore Edition) and ANI.

Continued here:

New method extracts lithium from seawater, to boost battery production - Interesting Engineering


Solvent engineering for scalable fabrication of perovskite/silicon tandem solar cells in air – Nature.com

Distinction of different alcohols as solvents

The perovskite films were fabricated by a two-step sequential deposition method based on previous work [15,23]. As depicted in Fig. 1a, our process combines co-evaporation and blade-coating techniques to meet the requirements for large-area fabrication of the perovskite films. Supplementary Fig. 2 shows the deposition of an inorganic framework on both glass and textured silicon substrates. It is worth noting that the second step was implemented in air to match a realistic production environment. However, ethanol and isopropyl alcohol, which are widely used as solvents for the organic salt in the second step, confront two major challenges in ambient conditions: first, these solvents readily absorb environmental moisture [38]; second, the rapid evaporation of the solution affects film uniformity. Consequently, these challenges often result in inhomogeneous, poor-quality perovskite films, adversely affecting the power conversion efficiency (PCE) and stability of the devices.

a Schematic of the hybrid two-step deposition method. b Physical parameters of different alcohols. c Images of organic salt solutions in different alcohols after exposure to air for 1 h. d Images of perovskite films after blade-coating of organic salts, without gas-quenching or annealing. The direction of blade-coating is from left to right.

To address this issue, we analysed various alcohols with different saturated vapor pressures and polarities, including ethyl alcohol (EA), isopropanol (IPA), n-butanol (nBA) and n-pentanol (nPA). Images of the solutions after adding the organic salts to each alcohol are shown in Supplementary Fig. 3. For ease of expression, we refer to the solutions, films and devices fabricated with ethanol as the EA solution, EA film and EA device, and likewise for IPA, nBA and nPA. As the carbon chain is lengthened, both the polarity of the alcohol and its saturated vapor pressure decrease, as illustrated in Fig. 1b [41]. The saturated vapor pressure reflects the evaporation speed of the solvent, while the dielectric constant is positively related to the polarity of the solvent. Following the principle that like dissolves like [42], the mutual solubility of alcohols and water, and thus their capacity to absorb moisture, is dictated by their polarity difference. Given water's high polarity, alcohols with greater polarity are more soluble in water, leading to increased water absorption.

To investigate the impact of moisture on these different alcohol solutions in air, we exposed a measured amount of each solution to open air and observed the changes. In an air environment, moisture absorption leads to rapid oxidation of I⁻ to I₂, manifesting as a yellowing of the solution [43,44]. As shown in Fig. 1c, the EA and IPA solutions turned from colorless to light yellow after one hour of exposure, while the nBA and nPA solutions exhibited no significant color change, underscoring the protective effect of low-polarity solvents against moisture interference. Furthermore, we compared the films after blade-coating, without gas quenching or annealing, on the same substrate (glass/inorganic framework) and documented the changes photographically. Figure 1d illustrates that EA and IPA volatilize quickly and completely after blade-coating, in contrast to the nBA and nPA films, which show a gradual darkening. This shift signifies a decrease in volatilization rate with increasing carbon chain length, affecting perovskite crystallization dynamics. However, the slower volatilization rate results in lingering residual organic salts, which continue to undergo dissolution-recrystallization reactions with the perovskite [45]. This leads to localized accumulations of organic salts, as evidenced in Supplementary Figs. 4 and 5.

Supplementary Fig. 6 displays images of perovskite films fabricated using the different alcohols, in both N2 and air environments. These images corroborate the notion that moisture accelerates the crystallization of perovskite films [46], as inferred from the observable color changes. To further evaluate the effect of the solution volatilization rate on perovskite film formation, we compared the morphology and structure of the films using scanning electron microscopy (SEM) and X-ray diffraction (XRD). Supplementary Fig. 7 reveals a pronounced PbI2 signal in EA films before annealing, leading to a substantial amount of PbI2 at the bottom of the perovskite layer (Fig. 2a and e). This indicates that the conversion from the inorganic framework to perovskite is incomplete. Such findings suggest that slowing the solvent volatilization rate prolongs the reaction between the inorganic framework and the organic salt solution, promoting the transformation of the framework into perovskite.

a–d Top-view and cross-sectional SEM images. e XRD patterns of perovskite films after annealing. f PL spectra of perovskite films with the emission collected from the glass side. g Time-resolved PL transients of perovskite films; double exponentials were used for fitting the TRPL curves. h–j PL mapping of perovskite films over a 1.5 cm × 1.5 cm active area.

Comparatively, perovskite films fabricated in an air environment exhibit a heightened PbI2 signal (Supplementary Fig. 8 and Fig. 2e), demonstrating that the moisture absorbed during fabrication prompts the decomposition of perovskite films upon air annealing. Specifically, the IPA films show a strong PbI2 diffraction peak located at 12.6° (Fig. 2e), which stemmed from the decomposition of the perovskite after air annealing at 35% relative humidity, a finding consistent with the SEM image of the IPA films (Fig. 2b). Impressively, the nBA films exhibited the lowest PbI2 peak intensity in Fig. 2e, with minimal residual PbI2 particles observed on the surface (Fig. 2c), indicating negligible perovskite decomposition. However, a strong PbI2 signal was found in the nPA films, for which the solvent volatilization rate is slowed even further (Fig. 2e); this was attributed to the destruction of the perovskite structure by residual solution (Fig. 2d). Despite the low polarity of nPA, the reduction in solution evaporation rate inadvertently introduces excessive H2O into the perovskite films, exacerbating degradation during annealing [34]. The UV-vis spectra and Tauc plots of perovskite films fabricated using the various alcohols are shown in Supplementary Figs. 9 and 10, and the UV-vis spectra of the inorganic framework are detailed in Supplementary Fig. 9. These results elucidate that the polarity and the evaporation rate of the solvent jointly determine the level of H2O absorption. In this view, nBA emerges as the optimal solvent for our specific requirements.

To discern the impact of the different alcohol solvents on the defect density of the perovskite layers, we performed steady-state photoluminescence (PL) measurements on samples with the configuration glass/perovskite. As shown in Fig. 2f, the PL emission peak of the EA films, measured from the glass side, exhibited a blue shift of several nanometers relative to the others. This shift indicates a residual amount of PbI2 at the bottom of the perovskite, owing to the incomplete conversion of PbI2. Notably, the nBA films exhibited the highest PL intensity, surpassing both the IPA and nPA films. This enhancement is attributed to the enlarged grain size and effective elimination of PbI2, which in turn reduces the density of grain boundaries and suppresses non-radiative recombination. In addition, time-resolved photoluminescence (TRPL) measurements further supported these findings, with the lifetimes of the EA, IPA, nBA and nPA samples recorded at 136.3, 146.6, 350.7 and 142.9 ns, respectively (Fig. 2g). These results highlight the superior performance of nBA in minimizing non-radiative recombination within the perovskite bulk.
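As a side note on the fitting procedure, the double-exponential TRPL analysis is standard enough to sketch. The snippet below is a minimal illustration, not the authors' code: it fits a biexponential decay with scipy on synthetic data and reports an amplitude-weighted average lifetime, one common convention (the paper does not state which averaging it uses).

import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    # Double-exponential decay: I(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic TRPL trace standing in for measured data (time in ns).
t = np.linspace(0, 2000, 500)
signal = biexp(t, 0.6, 40.0, 0.4, 400.0) + np.random.normal(0, 0.005, t.size)

popt, _ = curve_fit(biexp, t, signal, p0=[0.5, 50.0, 0.5, 300.0])
a1, tau1, a2, tau2 = popt

# Amplitude-weighted average lifetime; an intensity-weighted definition
# (sum a*tau^2 / sum a*tau) is also common in the literature.
tau_avg = (a1 * tau1 + a2 * tau2) / (a1 + a2)
print(f"tau1 = {tau1:.1f} ns, tau2 = {tau2:.1f} ns, <tau> = {tau_avg:.1f} ns")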

We then performed PL mapping to investigate the homogeneity of the films, as shown in Fig. 2h–j. Given the significant amount of PbI2 in the EA films, which would notably passivate defects and enhance the PL signal strength (as detailed in Supplementary Fig. 11), the EA films were excluded from this part of the analysis. The nBA and nPA films demonstrated superior uniformity compared to the IPA films, a trait ascribed to their lower saturated vapor pressure. This characteristic, arising from the solvents' longer chain length, leads to slower volatilization, while the reduced polarity further restricts water ingress into the films. Both factors lower the crystallization rate of the perovskite, yielding films with enhanced homogeneity [47]. However, the slow volatilization of the solvent allows residual solution to continue interacting with the perovskite through dissolution-recrystallization reactions. This process tends to produce the non-photoactive δ-phase and creates voids within the bulk [48], culminating in a diminished PL mapping signal in the nPA films. Overall, the nBA films demonstrated less non-radiative recombination and superior uniformity, making them conducive to scaled-up fabrication of perovskite films.

We fabricated single-junction perovskite solar cells with an architecture of glass/ITO/NiO/SAM/1.68 eV perovskite/C60/SnOx/Cu. The schematic structure is shown in Fig. 3a, while the detailed photovoltaic parameters of devices with an active area of 0.049 cm² using EA, IPA, nBA and nPA are summarized in Supplementary Table 3 and Fig. 3b. For further comparison, we constructed devices under two distinct conditions, an N2 environment and ambient air, with their respective photovoltaic parameters detailed in Supplementary Fig. 12. Devices fabricated in air exhibit a smaller VOC than those fabricated in the N2 glove box, which can be attributed to moisture-induced film deterioration. More notably, air-fabricated devices generally suffered pronounced efficiency losses, except for those using the nBA solvent. This exception highlights nBA's resilience to air exposure during fabrication, with such devices achieving the highest conversion efficiency. Among our champion devices, the nBA devices displayed distinct advantages in VOC, JSC and FF, with a narrower distribution proving higher repeatability, as shown in Fig. 3b. According to Fig. 3c and Supplementary Table 4, the improvement of the nBA devices in VOC and JSC over the IPA group was attributed to lower non-radiative recombination losses and reduced parasitic absorption caused by PbI2 at the surface and in the bulk, which was also beneficial to the cells' light stability (Supplementary Fig. 13). The JSC values integrated from the external quantum efficiency (EQE) curves in Fig. 3d were calculated to be 20.81 and 20.99 mA cm⁻², respectively, corresponding well with the values obtained from J–V measurements. Compared with the IPA devices, the nBA devices displayed improved charge collection, particularly between 400 and 600 nm, due to larger grain sizes minimizing recombination [49]. To demonstrate the influence of uniformity on the performance of large-area devices, we compared the J–V curves of devices with a 1.044 cm² aperture area fabricated with IPA and nBA (Fig. 3e and Supplementary Figs. 16 and 17); the specific data are given in Supplementary Table 5. The nBA devices outperformed their IPA counterparts in FF and JSC, attributed to superior uniformity. Additionally, from the EQE spectra of eight cells with a small area of 0.049 cm² (Supplementary Figs. 14 and 15), we observed that the nBA devices exhibited a much narrower distribution of the corresponding integrated current. Furthermore, we fabricated PSCs with an area of 1.044 cm², producing 15 devices per type; the histogram of their PCE is displayed in the inset of Fig. 3e. Moreover, we compared the photovoltaic parameters of devices fabricated with IPA and nBA at different humidity levels, along with the XRD patterns of the films (Supplementary Figs. 18 and 19), which proved that nBA mitigates the effect of moisture during device fabrication.

a Schematic architecture of the single-junction device. b Photovoltaic parameters for IPA and nBA devices. c J–V curves of the champion opaque devices (0.049 cm² aperture area). d EQE spectra of the champion devices. e J–V curves of the champion opaque devices (1.044 cm² aperture area); PCE distributions of 15 devices of each type are shown in the inset. f QFLS values extracted from the PL spectra for the neat perovskite, HTL/perovskite and HTL/perovskite/ETL stacks. g EL spectra for IPA and nBA perovskite devices. h VOC evolution as a function of light intensity for the IPA and nBA perovskite devices.

We then carried out photoluminescence quantum yield (PLQY) measurements to quantify the quasi-Fermi level splitting (QFLS) in the neat perovskite layers and in stacks with the different layers (Fig. 3f) [50,51,52]. The implied VOC values estimated from the PLQY measurements were in good agreement with the values obtained from the J–V results. The above results suggest that replacing IPA with nBA promotes the conversion of PbI2, thereby synergistically mitigating non-radiative recombination losses both in the bulk and at the interface between the hole-transport layer (HTL) and the perovskite. The VOC loss, indicative of the recombination rate within the devices, was assessed through the electroluminescence EQE at an injection current equal to the short-circuit current, effectively operating the device as a light-emitting diode [36]. Under an injection current of 21 mA cm⁻² (equal to the short-circuit current Jph), the electroluminescence (EL) efficiencies of the IPA and nBA devices were 0.1% and 0.4% (Fig. 3g), corresponding to VOC losses of 0.180 and 0.144 V, respectively. This result is consistent with the J–V results, in which the IPA and nBA devices showed a VOC of around 1.20 V and 1.22 V. To further study the carrier recombination behavior, we investigated the dependence of the VOC on the light intensity [53], as shown in Fig. 3h. The VOC scales linearly with the natural logarithm of the light intensity, with a slope of nkT/q (equivalently, n·kT·ln10/q per decade), where n is the diode ideality factor. The IPA and nBA devices exhibited n values of 1.633 and 1.481, respectively, indicating reduced trap-assisted recombination in the nBA device.
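Both quantities quoted here follow from textbook diode relations, so they are easy to check numerically. The sketch below is an illustration, not the authors' analysis code: it reproduces the quoted VOC losses from the EL efficiencies via ΔVOC = −(kT/q)·ln(EQE_EL), and fits an ideality factor from synthetic VOC-versus-intensity data (the intensity values are invented for the example).

import numpy as np

KT_Q = 0.02585  # thermal voltage kT/q at ~300 K, in volts

# VOC loss from the reciprocity relation: dVOC = -(kT/q) * ln(EQE_EL).
for label, eqe_el in [("IPA", 0.001), ("nBA", 0.004)]:
    print(f"{label}: VOC loss = {-KT_Q * np.log(eqe_el):.3f} V")
# Prints ~0.179 V and ~0.143 V, matching the reported 0.180 and 0.144 V.

# Ideality factor from VOC vs light intensity: VOC = (n*kT/q)*ln(I) + const.
suns = np.array([0.05, 0.1, 0.2, 0.5, 1.0])          # assumed intensities
voc = 1.20 + 1.633 * KT_Q * np.log(suns)             # synthetic IPA-like data
n_fit = np.polyfit(np.log(suns), voc, 1)[0] / KT_Q   # slope / (kT/q)
print(f"fitted ideality factor n = {n_fit:.3f}")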

Considering the need for thicker perovskite layers when fabricating on textured silicon, the relevant characterizations of perovskite films on both glass and textured silicon substrates were conducted, as shown in Supplementary Figs. 20–26. The limited solvent penetration depth of IPA left a significant amount of unreacted PbI2 in the underlying layer and further degraded device performance. While complete conversion of the inorganic framework to perovskite is achievable through adjustments in parameters such as quenching gas pressure and blade-coating rate [54,55], such modifications can detract from film uniformity and device performance, as evidenced in Supplementary Figs. 27–30. Consequently, parameter tuning was not used to fully convert the IPA films in tandem devices.

The tandem device is illustrated in Fig. 4a, and a broader-area top view as well as cross-sectional SEM images of the bottom SHJ cell are shown in Supplementary Fig. 31. The performance of the SHJ cell with and without a semitransparent perovskite cell as a filter is shown in Supplementary Fig. 32 and Supplementary Table 6. It can be clearly seen from Fig. 4b that the textured surface, with pyramid sizes of 2–3 μm, was well covered by the conformally coated perovskite film as well as the other functional layers. The corresponding device performance is depicted in Fig. 4c and d: a tandem solar cell with an active area of 1.044 cm² achieved a champion PCE of 29.4% (VOC = 1.83 V, JSC = 20.45 mA cm⁻² and FF = 78.63%) under reverse scan, and the stabilized PCE was 28.8%. Moreover, an independently certified efficiency of 28.7% was obtained from Fraunhofer ISE (Supplementary Fig. 33).

a Schematic diagram of the perovskite/SHJ tandem solar cell. b Cross-sectional SEM image of the perovskite/SHJ tandem (average pyramid size 2–3 μm) for nBA devices. c J–V curves of the tandem device (1.044 cm² aperture area); a digital photo of a device is shown in the inset. d MPP tracking of the tandems; PCE distributions of 16 individual tandem devices of each type are shown in the inset. e EQE spectra of a current-matched fully textured monolithic perovskite/SHJ tandem cell. f J–V curves of the tandem device (16 cm² aperture area); a digital photo of a device is shown in the inset.

As shown in Fig. 4d, the integrated JSC values of the front and back subcells from the EQE spectra (Fig. 4e) were 20.62 and 20.51 mA cm⁻², respectively, in good agreement with the JSC determined from the J–V measurements once the loss caused by the Ag grid is considered. We further evaluated the operational stability of encapsulated tandem solar cells by measuring the maximum power output under 1-sun-equivalent illumination in ambient air at a relative humidity of 30–50%. The encapsulated device retained 96.8% of its initial PCE after 780 h of maximum power point (MPP) tracking (Supplementary Fig. 34).
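For readers who want to reproduce the EQE-to-JSC integration, the short sketch below shows the standard calculation, JSC = q∫EQE(λ)Φ(λ)dλ, where Φ is the AM1.5G photon flux. It is a generic illustration, not the authors' script; the EQE curve and the spectrum stand-in are made up so the snippet runs on its own (in practice, load tabulated AM1.5G data).

import numpy as np

wavelength_nm = np.arange(300, 1200, 5.0)
# Made-up subcell EQE; replace with measured data.
eqe = np.clip(0.9 * np.exp(-((wavelength_nm - 550) / 250) ** 2), 0, 1)
# Crude stand-in for the AM1.5G spectral irradiance in W m^-2 nm^-1.
am15g = 1.3 * np.exp(-((wavelength_nm - 700) / 350) ** 2)

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m s^-1
Q = 1.602e-19   # elementary charge, C

# Convert irradiance to photon flux (photons m^-2 s^-1 nm^-1), then integrate.
photon_flux = am15g / (H * C / (wavelength_nm * 1e-9))
jsc = Q * np.trapz(eqe * photon_flux, wavelength_nm)    # A m^-2
print(f"integrated JSC = {jsc / 10:.2f} mA cm^-2")      # 1 A m^-2 = 0.1 mA cm^-2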

To validate the applicability of our approach to scalable fabrication, we used blade-coating to produce perovskite films on 36 cm² glass substrates. Steady-state PL and XRD tests on samples taken from different regions of the films (Supplementary Figs. 35 and 36) demonstrated superior uniformity in the nBA films compared to the IPA films. Furthermore, we fabricated 36 cm² perovskite/silicon tandem cells (aperture area 16 cm²) and achieved a conversion efficiency of 26.3% (VOC = 1.815 V, JSC = 18.54 mA cm⁻², FF = 78.31%), which is among the highest PCEs reported for large-area perovskite/silicon tandem cells [11]. The consistency of the EQE spectra across different regions indicates excellent film uniformity (Supplementary Fig. 37).

For the further development of perovskite/silicon tandem solar cells, scaling the perovskite films up to M6 size (166 mm × 166 mm) becomes essential, a goal that proves challenging with blade-coating owing to issues with film uniformity. Slot-die coating, a scalable technology that allows continuous liquid injection, therefore emerges as the preferable method going forward [49,56]. Critical to this method is the complete conversion of the inorganic framework into perovskite, achievable through careful adjustment of the precursor solution concentration, its injection rate, the coating speed, the gap distance between the blade and the substrate, and the quenching gas pressure. Digital photographs of perovskite films fabricated under these conditions are shown in Supplementary Figs. 38–40. With the optimum slot-die coating parameters (1 mL/min, 100 μm and 30 PSI), we achieved perovskite films with excellent homogeneity (Supplementary Fig. 41). The optimal device delivered a PCE of 25.9% over 16 cm² (VOC = 1.823 V, JSC = 18.50 mA cm⁻², FF = 76.63%), as shown in Supplementary Fig. 42. Slot-die-coated devices are anticipated to surpass the efficiency of blade-coated ones in the future.

Read the original:

Solvent engineering for scalable fabrication of perovskite/silicon tandem solar cells in air - Nature.com


US to test hypersonic missile tracking with space-based sensors – Interesting Engineering

The United States Missile Defense Agency (MDA) has stated it plans to test its space-based satellites equipped with hypersonic missile tracking sensors. The Hypersonic and Ballistic Tracking Space Sensor (HBTSS) satellites were deployed into orbit in February of this year.

HBTSS is designed to enable the MDA to get an early warning of potential hypersonic missiles. Presently, ground-based systems, while sophisticated, are limited by the curvature of the Earth and the nature of hypersonic missile flight paths.

To this end, sensors located in orbit will have an unobstructed view, enabling more accurate and timely interception. MDA serves as the Defense Department's executive agent for hypersonic defense.

It is racing to stay ahead of threats from Russian and Chinese development efforts. Tracking hypersonic missiles from space is necessary to allow interceptors more time to lock on.

Air Force Lt. Gen. Heath Collins explained at a Center for Strategic and International Studies discussion on June 6 that the major challenge with hypersonic missiles is that they re-enter the atmosphere sooner than ballistic missiles do and are detected later, leaving a very small window for interception due to their high speed.

"And so instead of being down, looking up to find a hypersonic, you want to be high, looking down to track hypersonic," he continued.

"That's what Hypersonic and Ballistic Tracking Space Sensor is all about."

Although no specific date for the first test has been announced, Lt. Gen. Collins stated that it would occur in approximately a week. The test will involve a dummy target traveling at hypersonic speeds within the satellites' field of view.

He said the test would assess the sensitivity, timeliness and accuracy of the two systems against the demonstration objectives for HBTSS and could inform changes or confirm the system's effectiveness, ultimately contributing to the Space Development Agency's future plans.

A second test is also planned for later in the year following the initial test.

"The tests are key steps in our ability to prove out that we can close a hypersonic fire control loop from space," Lt. Gen. Collins said.

"We are in lockstep working this with the Space Development Agency, and they are already planning HBTSS-like sensors in their future tranches of the Proliferated Warfighter Space Architecture to start filling out that truly global hypersonic kill chain."

However, detection and tracking are only part of the solution. The ability to physically intercept and destroy hypersonic missiles is equally important.

Collins explained that the MDA is focused on finding alternative near-term capabilities for interceptors as it works to develop the Glide Phase Interceptor (GPI) as quickly as possible. Congress is urging the agency to expedite the new interceptor's field readiness.


Christopher McFadden Christopher graduated from Cardiff University in 2004 with a Masters Degree in Geology. Since then, he has worked exclusively within the Built Environment, Occupational Health and Safety and Environmental Consultancy industries. He is a qualified and accredited Energy Consultant, Green Deal Assessor and Practitioner member of IEMA. Chris’s main interests range from Science and Engineering, Military and Ancient History to Politics and Philosophy.

Follow this link:

US to test hypersonic missile tracking with space-based sensors - Interesting Engineering


Remembering ‘Doc’ Helms: beloved mentor and pioneer in architectural lighting – University of Colorado Boulder

Professor Ron Helms (right) in CU Boulder's photometric lab in 1967.

In 1973, Illumination 1, which teaches the fundamentals of illuminating engineering, became a required course for all CU Boulder architectural engineering students and has been taught ever since.

"Doc nurtured and inspired so many engineers, designers, educators and manufacturing professionals," said Cheryl English, one of Helms' students from 1977-81 and a retired lighting executive. "We remember Doc for his keen sense of humor and dedication to his students, whom he referred to as his family."

Helms received his undergraduate and master's degrees from the University of Illinois and his PhD from Ohio State University. He left CU Boulder in 1981 to become the head of the University of Kansas' architectural engineering program, where he established the Bob Foley Illumination laboratory in 1985. He then went on to North Carolina Agricultural and Technical State University to establish a new illuminating engineering program; he retired in 2006.

Helms published many papers, authored three textbooks and presented technical programs.

"He wasa strong advocate for the recognition of architectual engineering and illuminating engineering ascredible disciplines in the industry,"English said. "Heworked to establish that recognition in professional organizations and engineeringaccreditations."

Helms leaves behind his four children, eight grandchildren, extended family and "countless lighting professionals who had the privilege of benefiting from his mentorship or attending his education programs," English said.

A memorial service was held on June 1 at Westminster Presbyterian Church in Greensboro, North Carolina.

Read the rest here:

Remembering 'Doc' Helms: beloved mentor and pioneer in architectural lighting - University of Colorado Boulder


Super impressive wall of wind turbines yield 2,200 kWh of quiet energy – Interesting Engineering

A wind fence developed by New York-based designer Joe Doucet is set to bring clean energy production into urban landscapes. The fence consists of vertical wind turbines, is modular, and, most importantly, is pleasing to the eye, making it more likely to be adopted in hotels, corporate buildings, and residential units.

Wind energy is an important component of the renewable energy mix that countries have adopted as they aim for a future away from fossil fuels. To achieve maximum energy gain and efficiency, original equipment manufacturers (OEMs) build bigger turbines every year for large installations.

This has been preventing wind energy from participating in distributed energy generation the way solar panels do, installed on rooftops, in gardens, and now even on balconies.

In 2021, Doucet was researching distributed energy products for wind energy and found that few good options were available. So the designer did what he does best: design a new product that was both efficient and scored well on aesthetics.

Doucet's original design was called the Wind Turbine Wall. Over the last two years, the designer has developed and tested the concept several times, with the majority of changes affecting the shape and size of the blades.

With the team at Airiva, a company he co-founded with energy industry veteran Jeff Stone, Doucet put 16 vertical turbine blade designs through their paces to arrive at three final versions that made it to wind tunnel testing.

After rigorous testing at two facilities in the US, the team was convinced that the helical structure of the turbine blades was the most efficient. This isn't the first time someone has worked with helically shaped blades in a vertical turbine. However, where Airiva claims to have made real progress is in extracting the maximum benefit from multiple blades operating simultaneously.

In a standard setup, where eight helical blades are precisely arranged, the Wind Fence generates about 2,200 kilowatt-hours of energy annually.

Based on the output of a single unit, an average US household would need five Wind Fence units to remove its dependence on the grid completely. This might not sound like much until you realize that each unit measures nearly 14 feet (4.2 m) by seven feet (2.1 m).
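The five-unit figure is easy to sanity-check. A minimal sketch, assuming a roughly 10,500 kWh/year average US household consumption (an assumed EIA-style figure, not a number from the article):

import math

UNIT_OUTPUT_KWH_PER_YEAR = 2_200   # from the article
HOUSEHOLD_KWH_PER_YEAR = 10_500    # assumed average, not from the article

units_needed = math.ceil(HOUSEHOLD_KWH_PER_YEAR / UNIT_OUTPUT_KWH_PER_YEAR)
print(units_needed)  # -> 5, consistent with the article's claim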

Airiva's team isn't seeking residential customers to buy their product. Since the concept was first unveiled in 2021, corporations, public institutions, and real-estate firms have been keen to install it on their premises.

The noiseless movement of the wind turbines makes clean energy generation aesthetically pleasing and helps make a statement about the company's transition to a greener planet. Airiva plans to use 80 percent recycled material in its production.

The solution is modular, and one can install an array of units to increase energy production at a facility. Even then, the Wind Fence wouldn't match the energy output of a massive horizontal turbine. But that isn't a target Airiva is trying to beat either.

The advantage of distributed energy systems is that they suffer fewer transmission losses, since power is generated close to where it is used.

The company is still some distance away from installing its units commercially, though. Custom pilots could happen later this year, and the first orders will come in 2025.


Ameya Paleja Ameya is a science writer based in Hyderabad, India. A Molecular Biologist at heart, he traded the micropipette to write about science during the pandemic and does not want to go back. He likes to write about genetics, microbes, technology, and public policy.

More:

Super impressive wall of wind turbines yield 2,200 kWh of quiet energy - Interesting Engineering


Automating Prompt Engineering with DSPy and Haystack | by Maria Mestre | Jun, 2024 – Towards Data Science

Teach your LLM how to talk through examples

One of the most frustrating parts of building gen-AI applications is the manual process of optimising prompts. In a publication earlier this year, LinkedIn described what they learned after deploying an agentic RAG application. One of the main challenges was obtaining consistent quality. They spent four months tweaking various parts of the application, including prompts, to mitigate issues such as hallucination.

DSPy is an open-source library that parameterises prompts so that prompting becomes an optimisation problem. The original paper calls prompt engineering "brittle and unscalable" and compares it to hand-tuning the weights of a classifier.

Haystack is an open-source library to build LLM applications, including RAG pipelines. It is platform-agnostic and offers a large number of integrations with different LLM providers, search databases and more. It also has its own evaluation metrics.

In this article, we will briefly go over the internals of DSPy, and show how it can be used to teach an LLM to prefer more concise answers when answering questions over an academic medical dataset.

This article from TDS provides a great in-depth exploration of DSPy. We will be summarising and using some of their examples.

In order to build an LLM application that can be optimised, DSPy offers two main abstractions: signatures and modules. A signature is a way to define the input and output of a system that interacts with LLMs. The signature is translated internally into a prompt by DSPy.
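The original post includes a code listing at this point; as a stand-in, here is a minimal signature along the lines DSPy's documentation uses (the class and field names are illustrative):

import dspy

class QuestionAnswering(dspy.Signature):
    """Answer the question."""

    question = dspy.InputField(desc="the question to answer")
    answer = dspy.OutputField(desc="the answer to the question")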

When using the DSPy Predict module (more on this later), this signature is turned into a prompt along the following lines:
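(The prompt below is reconstructed from DSPy's standard template for such a signature; the exact wording in the original article may differ slightly.)

Answer the question.

---

Follow the following format.

Question: the question to answer
Answer: the answer to the question

---

Question: {your question here}
Answer: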

DSPy also has modules, which define predictors with parameters that can be optimised, such as the selection of few-shot examples. The simplest module is dspy.Predict, which does not modify the signature. Later in this article we will use the module dspy.ChainOfThought, which asks the LLM to provide reasoning.

Things start to get interesting once we try to optimise a module (or, as DSPy calls it, compile a module). When optimising a module, you typically need to specify three things: the module to be optimised, a training set, and a metric to evaluate performance.

When using the dspy.Predict or dspy.ChainOfThought modules, DSPy searches through the training set and selects the best examples to add to the prompt as few-shot examples. In the case of RAG, it can also include the context that was used to obtain the final response. It calls these examples demonstrations.

You also need to specify the type of optimiser you want to use to search through the parameter space. In this article, we use the BootstrapFewShot optimiser. How does this algorithm work internally? It is actually very simple, and the paper provides some simplified pseudo-code:
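The pseudo-code listing did not survive this copy; the sketch below is a condensed paraphrase of the simplified algorithm from the DSPy paper (the helper functions are pseudocode placeholders, and names match the description that follows):

class SimplifiedBootstrapFewShot:
    def __init__(self, metric=None):
        self.metric = metric

    def compile(self, student, trainset, teacher=None):
        teacher = teacher if teacher is not None else student
        compiled_program = student.deepcopy()

        for example in trainset:
            # Run the teacher program and record the trace of all
            # internal Predict calls (inputs and outputs of each step).
            prediction, predicted_traces = run_with_trace(teacher, example)

            # Keep the trace as demonstrations only if it passes the metric.
            if self.metric(example, prediction, predicted_traces):
                for predictor, inputs, outputs in predicted_traces:
                    demo = make_demonstration(inputs, outputs)
                    add_demo(compiled_program, predictor, demo)

        return compiled_program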

The search algorithm goes through every training input in the trainset, gets a prediction, and then checks whether it passes the metric by calling self.metric(example, prediction, predicted_traces). If the metric passes, the examples are added to the demonstrations of the compiled program.

The entire code can be found in this cookbook with an associated colab, so we will only go through some of the most important steps here. For the example, we use a dataset derived from the PubMedQA dataset (both under the MIT license). It contains questions based on abstracts of medical research papers and their associated answers. Some of the answers provided can be quite long, so we will use DSPy to teach the LLM to prefer more concise answers, while keeping the accuracy of the final answer high.

After adding the first 1000 examples to an in-memory document store (which can be replaced by any number of retrievers), we can now build our RAG pipeline:
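The pipeline code itself did not survive this copy. A representative Haystack 2.x sketch looks like the following; the model name and template wording are assumptions:

from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Document store assumed to be already filled with the PubMedQA abstracts.
document_store = InMemoryDocumentStore()

template = """Answer the question based on the context.
Context:
{% for doc in documents %}{{ doc.content }}
{% endfor %}
Question: {{ question }}
Answer:"""

retriever = InMemoryBM25Retriever(document_store=document_store, top_k=3)

rag_pipeline = Pipeline()
rag_pipeline.add_component("retriever", retriever)
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=template))
rag_pipeline.add_component("llm", OpenAIGenerator(model="gpt-3.5-turbo"))
rag_pipeline.connect("retriever", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder", "llm")

# Example run: the same question is fed to the retriever and the prompt.
question = "What effects does ketamine have on rat neural stem cells?"
result = rag_pipeline.run(
    {"retriever": {"query": question}, "prompt_builder": {"question": question}}
)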

Let's try it out!

The answer to the question "What effects does ketamine have on rat neural stem cells?":

Ketamine inhibits the proliferation of rat neural stem cells in a dose-dependent manner at concentrations of 200, 500, 800, and 1000 μM. Additionally, ketamine decreases intracellular Ca(2+) concentration, suppresses protein kinase C-α (PKCα) activation, and phosphorylation of extracellular signal-regulated kinases 1/2 (ERK1/2) in rat neural stem cells. These effects do not seem to be mediated through caspase-3-dependent apoptosis.

We can see how the answers tend to be very detailed and long.

We start by creating a DSPy signature of the input and output fields:
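The listing was stripped from this copy; a sketch consistent with the surrounding description (field names assumed) is:

class GenerateAnswer(dspy.Signature):
    """Answer the question based on the given context."""

    context = dspy.InputField(desc="may contain relevant facts")
    question = dspy.InputField()
    answer = dspy.OutputField(desc="short and precise answer")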

As we can see, we already specify in our description that we are expecting a short answer.

Then, we create a DSPy module that will be later compiled:
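A sketch of such a module, reusing the GenerateAnswer signature above and wrapping the Haystack retriever (the structure is assumed; the original listing was not preserved):

class RAG(dspy.Module):
    def __init__(self, retriever):
        super().__init__()
        self.retriever = retriever  # the Haystack in-memory retriever from earlier
        self.generate_answer = dspy.ChainOfThought(GenerateAnswer)

    def forward(self, question):
        # Retrieve supporting documents with Haystack, then answer with DSPy.
        results = self.retriever.run(query=question)
        context = [doc.content for doc in results["documents"]]
        return self.generate_answer(context=context, question=question)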

We use the previously defined Haystack retriever to search the documents in the document store, via results = retriever.run(query=question). The prediction step is done with the DSPy module dspy.ChainOfThought, which teaches the LM to think step-by-step before committing to the response.

During compilation, the prompt that will be optimised has the same shape as the signature-derived prompt shown earlier, now with the context field and a step-by-step reasoning line added.

Finally, we have to define the metric that we would like to optimise. The evaluator has two parts: a semantic similarity score between the predicted and ground-truth answers, and a penalty for overly long answers.
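As a sketch of such a metric, using Haystack's SASEvaluator for the similarity part (the exact weighting and the 20-word threshold are our assumptions; the article only states that long answers are penalised):

from haystack.components.evaluators import SASEvaluator

sas_evaluator = SASEvaluator()
sas_evaluator.warm_up()

def mixed_metric(example, pred, trace=None):
    # Part 1: semantic answer similarity against the ground truth (0..1).
    similarity = sas_evaluator.run(
        ground_truth_answers=[example.answer],
        predicted_answers=[pred.answer],
    )["score"]
    # Part 2: penalise long answers (threshold and slope are assumptions).
    n_words = len(pred.answer.split())
    length_penalty = 1.0 if n_words <= 20 else max(0.0, 1.0 - (n_words - 20) / 20)
    return similarity * length_penalty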

Our evaluation dataset is composed of 20 training examples and 50 examples in the devset.

If we evaluate the current naive RAG pipeline with the code below, we get an average score of 0.49.
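The evaluation call was not preserved; a minimal sketch, assuming devset is a list of dspy.Example objects with question and answer fields:

from dspy.evaluate.evaluate import Evaluate

evaluate = Evaluate(
    devset=devset, metric=mixed_metric, num_threads=1, display_progress=True
)
evaluate(RAG(retriever))  # reports an average score of 0.49 for the naive pipeline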

Looking at some examples can give us some intuition on what the score is doing:

Question: Is increased time from neoadjuvant chemoradiation to surgery associated with higher pathologic complete response rates in esophageal cancer?

Predicted answer: Yes, increased time from neoadjuvant chemoradiation to surgery is associated with higher pathologic complete response rates in esophageal cancer.

Score: 0.78

But

Question: Is epileptic focus localization based on resting state interictal MEG recordings feasible irrespective of the presence or absence of spikes?

Predicted answer: Yes.

Score: 0.089

As we can see from the examples, if the answer is too short, it gets a low score because its similarity with the ground truth answer drops.

We then compile the RAG pipeline with DSPy:
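A sketch of the compile step, with the optimiser's defaults assumed:

from dspy.teleprompt import BootstrapFewShot

optimizer = BootstrapFewShot(metric=mixed_metric)
compiled_rag = optimizer.compile(RAG(retriever), trainset=trainset)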

After we do this and re-evaluate the compiled pipeline, the score is now 0.69!

Now it's time to get the final optimised prompt and add it into our Haystack pipeline.

We can see the few-shot examples selected by DSPy by looking at the demos field in the compiled_rag object:
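For instance, something along these lines (a sketch; named_predictors is DSPy's API for iterating over a module's predictors):

for name, predictor in compiled_rag.named_predictors():
    print(f"Predictor: {name}")
    for demo in predictor.demos:
        print(demo)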

There are two types of examples provided in the final prompt: plain few-shot examples and bootstrapped demos. The few-shot examples are simple question-answer pairs.

The bootstrapped demos, by contrast, contain the full trace of the LLM call, including the retrieved context and the reasoning the model produced (stored in a rationale field).

All we need to do now is extract these examples found by DSPy and insert them in our Haystack pipeline:
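One way to do this (a sketch, not the article's exact code: each demo is rendered into plain text that can be pasted into the prompt template):

static_prompt = ""
for _, predictor in compiled_rag.named_predictors():
    for demo in predictor.demos:
        if hasattr(demo, "rationale"):  # bootstrapped demo with full trace
            static_prompt += (
                f"Question: {demo.question}\n"
                f"Reasoning: Let's think step by step. {demo.rationale}\n"
                f"Answer: {demo.answer}\n\n"
            )
        else:  # plain question-answer pair
            static_prompt += f"Question: {demo.question}\nAnswer: {demo.answer}\n\n"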

Our new pipeline becomes:
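A sketch of the updated pipeline, with the demos pasted in front of the retrieved documents (this builds on the earlier snippets; the template wording is an assumption):

new_template = (
    "Answer the question based on the context. Keep the answer short.\n\n"
    + static_prompt
    + "Context:\n{% for doc in documents %}{{ doc.content }}\n{% endfor %}\n"
    + "Question: {{ question }}\n"
    + "Reasoning: Let's think step by step."
)

optimized_pipeline = Pipeline()
optimized_pipeline.add_component("retriever", retriever)
optimized_pipeline.add_component("prompt_builder", PromptBuilder(template=new_template))
optimized_pipeline.add_component("llm", OpenAIGenerator(model="gpt-3.5-turbo"))
optimized_pipeline.connect("retriever", "prompt_builder.documents")
optimized_pipeline.connect("prompt_builder", "llm")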

Let's check the same question we tried before.

Question:

What effects does ketamine have on rat neural stem cells?

Before:

Ketamine inhibits the proliferation of rat neural stem cells in a dose-dependent manner at concentrations of 200, 500, 800, and 1000 μM. Additionally, ketamine decreases intracellular Ca(2+) concentration, suppresses protein kinase C-α (PKCα) activation, and phosphorylation of extracellular signal-regulated kinases 1/2 (ERK1/2) in rat neural stem cells. These effects do not seem to be mediated through caspase-3-dependent apoptosis.

After:

Ketamine at higher concentrations inhibits the proliferation of rat neural stem cells, while not affecting apoptosis. Additionally, it decreases intracellular calcium concentration and suppresses PKC activation and ERK1/2 phosphorylation in these cells.

Job done!

In this post, we have used DSPy to optimise the prompt used in a Haystack RAG pipeline. We did so using a custom metric based on Haystack's evaluation framework that penalised the LLM for long answers while keeping the similarity with the correct answer high. With this approach, we managed to improve our performance by almost 40% without having to do any manual prompt engineering.

More:

Automating Prompt Engineering with DSPy and Haystack | by Maria Mestre | Jun, 2024 - Towards Data Science


Data science and business minors to move to School of Engineering, Owen in Fall 2024 – The Vanderbilt Hustler

As of Fall 2024, the undergraduate business and data science minors will be housed in the Owen Graduate School of Management and the School of Engineering, respectively, instead of Vice Provost for Undergraduate Education Tiffiny Tung's portfolio. Tung emphasized that these programs will retain their interdisciplinary qualities, which she said are a core aspect of a Vanderbilt education.

This shift comes amid Vanderbilt Law School offering a new introduction to legal studies minor starting in Fall 2024. The business and legal studies minors will be governed by students' home schools' academic policies because Owen and VLS do not have undergraduate academic policies.

"We're hoping that we'll get more classes added [for these minors] because now those programs will be in an academic school or department that can focus on curricular development, which will be nice for enhancing student options," Tung said. "I respect the autonomy of academic departments and schools and the professors within them. Because they're on the ground teaching in the classroom, they know really well what to develop for the students."

Director of the Business Minor Program Gary Kimball said the structure of the business minor will largely remain the same despite moving to a professional school.

"One change that students will see in the future is that Owen faculty will begin offering [undergraduate] elective courses as well, giving students even more choices," Kimball said in a message to The Hustler.

Dr. Charreau Bell, director of the data science minor, said its move to the School of Engineering was a collaborative effort among Vanderbilt leaders. She was unsure whether the minor will eventually move to the College of Connected Computing.

"The move to Engineering is fortuitous," Bell said. "We have this new capacity to leverage the resources for academic programs within a school. I think we're going to be able to grow more and offer more electives, which is something I'm very happy about."

Here is the original post:

Data science and business minors to move to School of Engineering, Owen in Fall 2024 - The Vanderbilt Hustler


Richest 0.1% in UK emit 22x more transport emissions than low earners – Interesting Engineering

A groundbreaking report by the Institute for Public Policy Research (IPPR) has unveiled a stark reality in the UK's transport emissions landscape, highlighting a significant disparity in contributions to the climate crisis.

"Emissions from travel are not fairly shared across people living in Great Britain," says the report.

The research reveals that the nation's wealthiest individuals are disproportionately responsible for these emissions.

As per the study, the richest 0.1% of the population have been found to emit a staggering 22 times more emissions from transport compared to low earners, and 12 times more than the national average.

This revelation comes amidst increasing concerns about climate change and its devastating impacts, including soaring temperatures and catastrophic weather events.

"Globally, we are not on track to keep warming below 1.5°C and have not made the required commitments to keep warming below 2°C," states the report, titled "Moving Together: A People-focussed Pathway to Fairer and Greener Transport."

The report highlights that half of all transport emissions in Britain originate from just one-fifth of the population. Besides, the top 10% of polluters are responsible for a staggering 42% of all transport emissions.

This startling statistic paints a clear picture of the unequal distribution of environmental burden within the country. This finding further emphasizes the concentration of environmental impact within a small segment of society.

Moreover, a closer examination of travel patterns reveals a direct correlation between wealth and distance traveled.

"People with an income over £100k travel at least double as far each year as those under £30k, and almost three times further than those under £10k," the researchers noted.

These numbers suggest that higher income levels facilitate increased mobility and, consequently, higher emissions.

"There is huge disparity between the emissions from transport of the wealthiest and those on lowest incomes," the report commented.

It also sheds light on demographic disparities in transport emissions.

Men, individuals aged 35-64, and residents of less deprived areas tend to have higher emissions levels, while those with disabilities, non-white British ethnicities, and individuals from more deprived backgrounds tend to emit less.

This finding underscores the complex interplay between socioeconomic factors and environmental impact.

"Our transport system both reflects and contributes to social inequalities. Reducing emissions can actually tackle some of that injustice, if done fairly," said Dr Maya Singer Hobbs, senior research fellow at IPPR.

Alarmingly, the UK's progress in reducing transport emissions over the past three decades has been minimal, with the transport sector now standing as the country's largest emitter.

The report urges the government to take decisive action to address this inequality and accelerate efforts towards decarbonization.

Among the suggestions offered by the report are the implementation of new taxes on private jets, a mode of transport favored by the wealthy, and improvements to public transportation to provide more sustainable options for all.

Additionally, it calls for a faster transition to electric vehicles to reduce reliance on fossil fuels in the transport sector. This comprehensive study aims to mitigate the environmental impact of the transport sector while addressing the underlying socioeconomic disparities that contribute to it.

"Now is not the time to slow down our efforts to reach net zero; doing so just fuels existing transport inequalities," concluded Stephen Frost, a principal research fellow at IPPR.


Aman Tripathi An active and versatile journalist and news editor. He has covered regular and breaking news for several leading publications and news media, including The Hindu, Economic Times, Tomorrow Makers, and many more. Aman holds expertise in politics, travel, and tech news, especially in AI, advanced algorithms, and blockchain, with a strong curiosity about all things that fall under science and tech.

Follow this link:

Richest 0.1% in UK emit 22x more transport emissions than low earners - Interesting Engineering


The Evolution of Use Cases in Modern Software Engineering – InfoQ.com


Shane Hastie: Good day, folks. This is Shane Hastie for the InfoQ Engineering Culture Podcast. Today, I have the privilege of sitting down again with Ivar Jacobson.

Ivar has been a guest on the podcast before, and is one of the stalwarts of software engineering. Ivar, welcome. Thanks for taking the time to talk to us today.

Ivar Jacobson: Oh, thank you. I really appreciate the invitation. Thank you.

Shane Hastie: I mentioned that you're a stalwart, but perhaps for some of our audience, they might not have come across your work before, so do you want to give us a brief overview of, who is Ivar Jacobson?

Ivar Jacobson: Well, yes, I will be happy to do that. I didn't know I was interested in methodology when I was 28 years old, worked at Ericsson, and became project manager for our most mission-critical project at that time. It was a computer-based telecom switch. The methodology people used, even if they didn't talk about it as a methodology, was very close to what many methods were at that time: separating functions from data.

We had a big program store, where you put in the functions, and you had a big data store where you put your data, and you developed the program separate from the data. My job was to make sure we built a product that could be marketed around the world and easily adapted to new requirements. And thanks to having a hardware background, I understood that what they were doing was really not working, and would never work. When I say they, I mean the development team; it was 70 people, or something like that.

As a project manager, I shouldn't put my fingers in that. I should run the project, and the method was already set in stone. That was the intention. But as a bad project manager, I couldn't keep my fingers away from how to do it. So I came up with a way that we now classify as component-based, and that resulted in success.

Ericsson was the only telecom vendor that used that approach, and they won everything, but it took 10 years before people at Ericsson knew it. But then I was rewarded with getting a PhD during work hours. I spent five years doing nothing for Ericsson, only getting the PhD. That was very educational because now I started to understand how a computer worked too. Well, that is a little bit exaggerated. And that was a late part of my life.

I was always involved in methodology work and had a great interest in it. So they sent me to standardization, and we created a modeling language in this standards group that is the predecessor to UML, and it was the biggest competitor to UML when UML came out in 1997. So in 1980, say, this telecom standard was adopted, and almost 20 years later UML came. And they are very similar languages in some aspects, in other aspects not.

After my PhD, I set up a company and we developed what later became the Rational Unified Process, because Rational acquired us, and because of what we had, Rational Software grew dramatically. I was there with two other gentlemen, Grady Booch and Jim Rumbaugh. We were nicknamed the Three Amigos and traveled around the world talking about what we had done together, namely the Unified Modeling Language and later also the Rational Unified Process, which came from my company in Sweden.

That went well until Agile came. Agile first killed the Rational Unified Process, it took only a couple of years, and then also UML; no one would do modeling anymore. By the way, all that is coming back. It's all coming back, and this is what has happened in the history of software engineering. It all comes back. The good ideas come back. At some point in time I had also come up with what was called use cases, and use cases became really the thing that was adopted all around the world.

But because they belonged to the old generation, so to speak, use cases were also thrown out and replaced by a simpler approach, user stories, and that's it. Now we are at about 2005 or 2000-something. I have always been interested in artificial intelligence. So in 2000 I founded a company, actually with my eldest daughter, and we developed a product to support software development. So we had copilots; we called them developer assistants.

We used an AI technique called intelligent agents, and we got a Jolt Award in 2003, if I remember correctly, for one of the most interesting products at that time. At that time, I was also fed up with methodologies, to be very honest with you. Methods came with a number of problems. We had method wars, so good people were fighting with one another over which had the best method.

We created sects around every method; there were people that loved that method, talked about that method and talked badly about other methods. And you couldn't reuse ideas. For instance, many people wanted to use use cases with their method. So they tried to take use cases from my method and put them into their method, but since they were described totally differently, it was not really successful.

So I said that methods put good ideas into prison, namely their own prison. You couldn't take them out and use them in other places; there were many problems like that.

So I decided enough is enough for me. We had a new method coming out that was agile and would compete very well with the methods we see today, popular methods like SAFe, for instance. Nothing bad said about SAFe; it's probably one of the better ones if you talk about these full-scale methods. But we selected another path. We decided we wanted to find what is the common ground of all these methods: what do they share? They must share something. They are all used for software and for product development engineering. So we created such a common ground, Essence, and it became an international standard.

And since then we have worked on popularizing it. We have come a very long way, but that's a very different story.

Shane Hastie: There's a lot there. The classic tool of requirements representation coming out of your work is the use case: a sequence of interactions, a flow of interaction between a person and a system, a tool to represent and communicate requirements. But one that seemed, and you alluded to it there, to go out of favor.

What happened there?

Ivar Jacobson: Yes. When Agile came, starting with extreme programming around, let's say, the beginning of 2000, the Agile movement basically killed any old approach, and it was more focused on how people work together, basically social engineering. Agile didn't really introduce any new technical practices. You can say that iterative development using sprints, working in small periods to deliver concrete results, was already there. It had been used since the early 1980s. There was scientific work around the spiral model of development by Barry Boehm.

So it had been around, and my own work and the work that we did at Rational Software at that time was definitely iterative. What has happened is that iterations have become shorter; sprints are shorter than they were at that time. At that time they may have been a month, now they're down to days, but the idea is basically the same. And then among the technical practices you can mention user stories as one example.

User stories focused on small things that a team could do in such an iteration or sprint. So Agile flirted with the developers, the people who really did something of useful value. Whereas in the old days, I must confess, we flirted, if I may use that word, with the system engineers, the more higher-level people, not the programmers directly.

We felt that a lot of the work is done in modeling. You identify the components, you identify the interfaces without writing a line of code, but of course you need to understand how it is going to be implemented. So there was a switch: the majority of developers felt they got support from the Agile way of thinking, and we were the old dinosaurs and everything we had done was just rubbish, get it out. And that was, of course, something I understood very early.

So I didn't want to continue and create a new dinosaur, or something that people would call a dinosaur, so we did this common ground instead. But when it comes to use cases, user stories were a really great contribution. I remember when I met Kent Beck, who was maybe the key originator of user stories, at a conference. I went over to him, he was signing books, and I said to him, "Congratulations, Kent." I'd known him for many years at that time, 10, 15 years, and I'd actually invited him to come to Sweden to teach Smalltalk to my people.

So I congratulated him, and he said something complimentary back, "Well, inspired by use cases," or something like that. These two things should never have been seen as in conflict. They were extremely good complements to one another, because use cases deal with the bigger picture.

You can, in a couple of hours, present a model of the system in the form of, say, 20 use cases, and everyone will understand what the system is going to do. Of course, to go deeper you require more thinking, but you get the big picture very quickly. And use cases are also a language people use when they talk from a business perspective. For instance, I don't know any modern product that doesn't talk about which use cases the product has. You see it also in normal English; use case has become an everyday English term. And if you Google it, you will find maybe a hundred times more hits than you find for user stories. I don't know exactly, but it's a dramatic difference.

So there is a value in use case thinking: it helps you understand the big picture of a product. You may have 800 user stories but only a handful of use cases, maybe 10. And so you get a really good picture of what the system is doing. And people do write use cases, they have done it all along, in particular business analysts, but then they have a problem communicating with the developer team, because the developers don't want to hear about use cases, they want to hear about user stories. And this is what has gone on for many years now, 20 years or so. But we are going to change it.

Shane Hastie: Use cases definitely are... We are seeing a resurgence, but I'm also seeing that there's something different about the way you and people like Alistair Cockburn are talking about use cases today compared to what they were. I will confess to having seen some pretty appalling use cases in my time.

One that springs to mind was an author who tried to write all of the functionality for a complete ERP system in a single use case. It ended up at 127 pages, and a week later even the author couldn't tell us what it was. Now that, in my experience, is the extreme example of a really badly done use case.

Ivar Jacobson: Yes, absolutely. When I started with use cases, I developed one way of using them; we called it use case driven development. So it's not only for requirements. A use case was also something you realized: for every use case we identified, we described how to realize it as a collaboration among components, with messages and so on. We used sequence diagrams, which are still very popular today, to describe how components interact to do something.

And then use cases drove test cases. Once you understood a use case, you could derive from it a big number of test cases. So we had a very big picture for use cases. Of course, we sometimes used them only for requirements, but those were then simpler use cases. So what we learned was that they can be used in so many different ways.
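
As an illustration of deriving tests from paths, here is a minimal sketch in Python. The "Withdraw Cash" use case, its path names and its steps are invented for this example; they are not taken from Jacobson's own material.

# Minimal sketch (hypothetical example): every named path through a
# use case becomes at least one test case stub.
PATHS = {
    "basic": ["insert card", "enter PIN", "choose amount", "take cash"],
    "wrong_pin": ["insert card", "enter PIN", "PIN rejected", "retry"],
    "no_funds": ["insert card", "enter PIN", "choose amount", "withdrawal refused"],
}

def derive_test_cases(paths):
    """Turn every path of the use case into a test case stub."""
    return [
        {"name": f"test_withdraw_cash_{path}", "steps": steps}
        for path, steps in paths.items()
    ]

for case in derive_test_cases(PATHS):
    print(case["name"], "->", len(case["steps"]), "steps")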

If you develop life-critical systems, you have requirements on a much higher level of detail than if you write, for instance, a business application or a website. So it's not one kind of use case that people need; there are many different variants. Alistair, you mentioned Alistair, he came to Sweden and learned about use cases. He felt it could be done better, as is natural, and of course it could be done better, so he started to work on a different way of doing use cases.

The value of that is that more people talked about it. He has definitely contributed a lot when it comes to popularizing use cases. He introduced a goal structure, so a goal has subgoals and subgoals have subgoals and so on, all the way down to very small things. Many people like that, and it's fine.

So you mentioned bad uses of use cases, and I've seen similar applications where I've immediately said, this is trying to do too much. People have loved the concept and tried to use it for more than is proper.

So, what we have done recently: I have personally been thinking for a couple of years, and working with some people, on how to bring use cases back. We had a plan for a campaign, but I said no, this is not enough; we need to get more people to stand behind it. Then I saw an article by Alistair, probably a LinkedIn article, where he said basically that use cases are needed, and I contacted him and we decided to write a paper together.

That's how we started, probably eight months ago or something like that. We wrote the paper together, and writing papers together when you sit far apart and don't have immediate contact is not an easy thing, but it went very well. A colleague in my company started off with a proposal; Alistair took it and said, "No, no, no, this is not how I want to write it." So he wrote a completely new one, and we decided it was close enough. With some modifications we had a paper, and it was published in ACM Queue in September-October of last year.

So that was our first step towards doing something together.

Shane Hastie: What does a good use case look like today?

Ivar Jacobson: It depends. I'll tell you one thing it does: it follows what we could call the foundation. Alistair and I have written a paper together, actually together with Ian Spence, who has been working with me for, I think, almost 20 years and is really a use case expert. The paper is called the Use Case Foundation, and it includes a definition of use cases. Just having a clear definition of what a use case is, one that we all can agree on, was a good thing in itself.

Now, if you ask me to tell you what a use case is, I have to think a little, but it is basically all the ways of using a system. And this is very important: it's the system that owns the use case. So it's all the ways of using the system to achieve a particular goal for a user.

Maybe I have got some word here wrong, but that's basically what it is. The critical part is that there is an entity to which the use case belongs, and the use case is all the ways of using it. All the ways meaning that if you perform a particular use case, there are many different paths you can follow: success paths, alternative paths and failure paths. So, all these ways of using the system to achieve a goal for a particular user, or something like that. And that's a very good start. Then there are other things in this Use Case Foundation document, like principles that we think are important to obey, and also some patterns.
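
To make that definition concrete, here is a minimal sketch using the classic ATM example; the class and field names are illustrative only and are not the vocabulary of the Use Case Foundation document itself.

from dataclasses import dataclass, field

@dataclass
class Flow:
    name: str      # e.g. "basic", "wrong PIN"
    kind: str      # "success", "alternative" or "failure"
    steps: list = field(default_factory=list)

@dataclass
class UseCase:
    system: str    # the entity that owns the use case
    actor: str     # who uses the system
    goal: str      # the value the actor wants to achieve
    flows: list = field(default_factory=list)

withdraw_cash = UseCase(
    system="ATM",
    actor="Bank customer",
    goal="Withdraw cash",
    flows=[
        Flow("basic", "success",
             ["insert card", "enter PIN", "choose amount", "take cash"]),
        Flow("wrong PIN", "alternative",
             ["insert card", "enter PIN", "PIN rejected", "retry"]),
        Flow("card retained", "failure",
             ["insert card", "enter PIN", "three failed PINs", "card kept"]),
    ],
)
print(f"{withdraw_cash.goal}: {len(withdraw_cash.flows)} paths")

Even this toy model shows the point he stresses: the system owns the use case, and one goal fans out into several success, alternative and failure paths.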

It's not a thick document, I think it's four pages, but it gives a really good foundation. The idea is that people, not only Alistair and myself or my people, will use this foundation and then describe their own different use case practices, their own ways of practicing use cases.

Alistair is working on a book now, and we are also working on a kind of book, but it's not a thick book. It's a very thin book, maybe 50 pages, maybe something similar to the Scrum Guide when it comes to use cases. It'll describe a family of use case practices, not only one. For instance, one practice we call use case storytelling, which is a very simple practice that works very well for, say, web development, just to take an example. Then there is another called use case visualization, or something like that; I don't know the exact wording, but it's about visualization. This is where you can use diagrams. Many typical products require much more detail about the use case than you need for developing a website, so they use, for instance, activity diagrams, swim lanes to describe what is done by the user and what is done by the system, and sequence diagrams.

So there is a whole bunch of diagrams you can use. That is the basic stuff. On top of that, you can have use case authoring, which is more advanced than use case storytelling. And when it comes to the modeling side, you can have use case modeling, which means you make a model of all your use cases, basically giving a design of the use cases. Use cases are not design, they are a specification, they are requirements. But you can get a very good, complete picture of what the system is supposed to do just by looking at such a use case model.

And there are more and more advanced ones. We have six such practices we are working on just now, and they should be presentable in a couple of weeks, I would say.

Shane Hastie: So at a fundamental level, I suppose I want to explore what is different today about these use cases, and the range you're talking about, compared to what we were working with when you first came up with them.

Ivar Jacobson: Yes. In the first version, let's call it use case 1.0, we identified iterations and we started to work with the most critical paths in every use case. We didn't take one use case, specify it completely at once, and then go and design it, code it and test it. No, that is what the people who wanted to kill us said we did, but it was never true. You can go back to the original book from 1992, and it's very clear that we developed in iterations and we didn't develop complete use cases. What we did was take a path through a use case. Such a path could very well be described today as a user story, or as several user stories. We didn't have that term, but that is how we were thinking. Now, in practice people didn't work like that, so in that sense the Agile movement was right: people did waterfall, people sat and did all the use case models up front.

And when we did a lot of modeling, we did too much modeling of the system. I remember I worked with one bank, a big bank in the US; they had 800 people sitting and doing models, and then they threw these models over to the developers, who were then supposed to code them. But the developers said, this is rubbish, and did it their own way anyway. So there was a complete, total disconnect between these two big groups of people.

We tried to change it, but I think the whole organization dissolved and was recreated before we had a chance to have an impact. So, I mean, they were using use cases wrongly. Now, I cannot only blame them; we could have done more to describe it better. So in 2005 we wrote the paper called Use Case 2.0, where we integrated into use cases something very similar to user stories.

We called them stories, but they were not identical. We had slices: a use case was sliced into a number of slices, you prioritized the slices, and every such slice had a number of stories. So we were getting close, but, I must admit, I think today that it was a mistake to do it like that. We should have kept user stories outside use cases and integrated with them instead. Whatever way people like to do user stories, let them do user stories exactly as they do today.

We keep to simpler use cases, and then we have an integration mechanism: we show how to integrate use cases with user stories, and that can be a very simple pattern. What we are doing now is actually just that. Our use cases are easily integrated with user stories, but user stories stay what people like Mike Cohn have made them, and that is now concrete. So we will have this use case family separate from user stories, and user stories will be what they are in, particularly, Mike Cohn's world. Having said all this, maybe I'll leave it there.
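
A rough sketch of what that integration pattern could look like as data; the slice and story names here are invented, and the real Use-Case 2.0 practice defines slices more richly than this.

# A use case is cut into slices; each slice is small enough for a
# sprint and carries the user stories a team will actually work on.
use_case = "Withdraw cash"

slices = [
    {
        "slice": "happy path, preset amounts",
        "stories": [
            "As a customer I can withdraw a preset amount",
            "As a customer I receive a receipt",
        ],
    },
    {
        "slice": "PIN handling",
        "stories": [
            "As a customer I can retry a mistyped PIN",
            "As a customer my card is retained after three failures",
        ],
    },
]

# The slices keep the big picture of the use case; the stories drop
# straight into an ordinary backlog, exactly as teams use them today.
backlog = [story for s in slices for story in s["stories"]]
print(f"{use_case}: {len(slices)} slices, {len(backlog)} stories")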

Shane Hastie: There's a lot there. We'll make sure that we have a link to the ACM article, but if people are looking for some more guidance, where do they go?

Ivar Jacobson: So that is the new thing here. Alistair and I wrote not only this Use Case Foundation, we also wrote a call to action where we said, basically, that use cases fill a hole that none of the other existing practices fill, and that hole is an important one to fill. Then we said: so, we are going to do these things, and we want to support others doing these things.

Number one was to create the Use Case Foundation, which I talked about. Number two was to create a number of initiatives where we integrate use cases with other elements such as user stories, story mapping and many other practices that are out there, and we identify how they can be integrated. We are talking about BDD and ATDD as well. So we are working with the people who are founders of these practices or have played a significant role in their development.

For BDD and ATDD, for instance, that is Ken Pugh, and when it comes to user stories we work with Mike Cohn, and also with Gunnar Overgaard, who wrote the book Use Cases: Patterns and Blueprints. Together, in each initiative, we work out how to integrate use cases with these other practices, and that's going on now. We have identified two such initiatives, user stories and BDD/ATDD, and I expect a couple of others coming quite soon. That work will be done without Alistair and me trying to play any kind of management role; it's not our style. So we'll see what comes out of it. Apart from that, we will see people developing their own use case ideas on top of the foundation. Alistair is doing that, we are doing that. As I said, we have six practices that we think we will publish in a short while, weeks.

Of course, Alistair is already out training in his new use cases. I don't know if I should call them new, but I believe it's at least a refresh of them, and we will do the same thing, hoping to create an interest so people can learn about them. And I think we will serve different markets, different parts of the market. Alistair has a populist and still very useful style of doing it; his goal orientation attracts a lot of people. We do it in our style, which is probably a little bit more scientific, but that's just a guess, who knows? Anyway, there are others too. I know there are many other people who want to do training in use cases, and we will develop our own courses based on the Use Case Foundation. So it's all fun.

Shane Hastie: A lot's happening with use cases. What was old is new again.

Ivar Jacobson: New.

Shane Hastie: Yes. So Ivar, again, thank you very much. If people want to continue the conversation, where do they find you?

Ivar Jacobson: LinkedIn. Well, I am not shy, so you can get my email address too.

Shane Hastie: All right, I'll make sure we have both of those links in the show notes. Thank you so much.

Ivar Jacobson: Thank you. I appreciate it.


Continue reading here:

The Evolution of Use Cases in Modern Software Engineering - InfoQ.com
