* This weblog serves as an "online notebook" for comments on current events, interesting items I run across, and the occasional musing. It promotes no particular ideology. Remarks may be left on the site comment board; all sensible feedback is welcome.
* CHEESE EVOLVES: The microbiomes that produce different cheeses were discussed here last spring. An article in THE NEW YORK TIMES ("That Stinky Cheese Is a Result of Evolutionary Overdrive" by Carl Zimmer, 24 September 2015), took a closer look at the micro-organisms that define cheeses.
Dr. Ricardo C. Rodriguez de la Vega, an evolutionary biologist at the French National Centre for Scientific Research in Paris, hunts cheeses. Every time he travels to an international conference or the like, he goes to a local cheese shop and asks: "Give me the wildest blue cheese you have."
He's not a fan of cheese as such, but of the fungi used to make it. Many cheeses require a particular species of mold to properly ripen. To produce Roquefort blue cheese, for example, cheese makers mix Penicillium roqueforti into fermenting curds. The mold grows through the cheese, giving it not only a distinctive blue color but also its taste.
To produce soft cheeses such as Camembert or Brie, on the other hand, cheese makers spray a different mold species, Penicillium camemberti, on the curds. The fungus spreads its tendrils over the developing cheese, eventually forming a rind; the rind of Camembert is nothing but a solid mat of mold. Along with defining the flavor, mold keeps cheese from spoiling by defending it from contaminating strains of fungi or bacteria.
Rodriguez de la Vega and his colleagues take their cheeses back to Paris, where the genomes of the cheese micro-organisms are sequenced. The researchers have found that cheese makers have, not all that deliberately, forced their fungi into evolutionary overdrive. The molds haven't simply gained new genetic mutations to help them grow better in cheese; over the past few centuries, they also have, without any specific prompting from humans, acquired large chunks of DNA from other species in order to thrive in their new habitat.
Cheeses have been around for thousands of years; cheese makers have often come up with new varieties by finding new molds. For example, in France the traditional method for making Roquefort cheese starts with bringing loaves of bread into caves. The Penicillium roqueforti fungus that grows on cave walls quickly attacks bread. The cheese makers retrieve the loaves, and break off bits of the loaves, along with the mold, to add to the curds.
It wasn't until the early 1900s that scientists began to identify the various species of molds in cheese. That wasn't just a matter of scientific curiosity; it then became possible for industrial cheese makers to select from a "library" of lab fungi to produce specific types of cheese in factories.
The modern species of molds used in cheese manufacture are very different from their ancestors -- most notably in that wild species of Penicillium mold typically feed on decaying plant matter, not milk. To trace out the evolutionary tree of this group of molds, Rodriguez de la Vega and his colleagues sequenced the genomes of ten species of Penicillium. Six of the species grow in milk, either to make cheese, or as contaminants; the other four are never found in cheese, including Penicillium rubens, the mold from which Alexander Fleming isolated the antibiotic penicillin in 1928.
The evolutionary tree assembled from the genomes showed how the molds genetically diverged from a common wild ancestor. That was what was expected, but the researchers were startled to find large chunks of DNA that, in the course of evolution, seemed to have come out of nowhere. It turned out that these sequences were effectively identical to those found in distantly related mold species -- or in other words, there had been "horizontal gene transfer" across a wide species gap.
Horizontal gene transfer was discovered over half a century ago in bacteria, and for decades it was generally associated with bacteria. In recent years, it has become apparent that it also takes place in animals, plants, and fungi. In 2014, for example, researchers found evidence that ferns acquired light-sensing genes 100 million years ago from another plant species known as hornworts. Cheese molds seem to be enthusiastic adopters of foreign DNA, with up to 5% of the entire genome of each mold the researchers sequenced made up of DNA from another species. The fact that these Penicillium genes remain effectively identical to those in the foreign species means they were acquired only a few centuries ago, not having had enough time to change through mutations.
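The dating argument above -- near-identical sequences imply a recent transfer -- can be illustrated with a toy molecular-clock calculation. The sequence length and substitution rate below are hypothetical, chosen only to show the arithmetic, not taken from the study:

```python
# Toy molecular-clock check: if two copies of a DNA chunk had diverged long
# ago, they should have accumulated many independent substitutions by now.
# Near-identity therefore points to a much more recent transfer.

def expected_differences(length, rate, years):
    """Expected number of differing sites between two diverged copies,
    assuming a constant per-site substitution rate on both lineages."""
    return length * 2 * rate * years

L = 250_000   # roughly 250 genes' worth of sequence (illustrative)
r = 1e-9      # substitutions per site per year (assumed rate)

print(expected_differences(L, r, 1_000_000))  # ~500 differences after a million years
print(expected_differences(L, r, 500))        # ~0.25 after five centuries
```

With an ancient divergence, hundreds of differences would be expected; observing essentially none is consistent only with an acquisition a few centuries back.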
The two biggest chunks of foreign DNA were named Wallaby, with 250 genes, and CheesyTer, with about 60. Neither is found in any wild Penicillium strain. It turns out they help molds grow faster on cheese: CheesyTer, for example, features a gene that appears to allow molds to break down lactose, a form of sugar found in milk. But the gene also slows the molds' growth on a diet of simple sugar. Rodriguez de la Vega says: "We are selecting for things that are not good in the wild, but are good for us."
This is characteristic of the genetic changes associated with domestication. The chunks of foreign DNA may have other genes beneficial to cheese production. Tatiana Giraud, a member of the research team, believes that understanding the evolution of these molds could give cheese makers new ideas about how to produce new flavors. However, she also warns that if horizontal gene transfer has enhanced the molds that help make cheese, it could also enhance the molds that contaminate them: "You have to be cautious that it might spread to spoilers."
* GMO LABELING BILL PASSED: As mentioned here early this year, the US House of Representatives produced a bill permitting the voluntary labeling of foods produced from genetically modified organisms (GMO). That bill didn't make it through the Senate -- but the Senate then replied with its own bill, which that body passed by a vote of 63 to 30 on 8 July. Given that the Senate bill was better thought-out than the original House bill, the House quickly confirmed it on 13 July, with President Obama saying he would have no problems signing it into law.
The law was seen as a win for food companies, farm groups, and biotech firms, which had been pushing the Federal government to set a single national standard to head off a patchwork of state labeling laws, such as one that went into effect in Vermont on 1 July. The Federal legislation (S.764) blocks states from issuing mandatory labeling laws, and requires food manufacturers to use one of three different labels to inform consumers of the presence of GMOs in products. Manufacturers can comply by:

-- printing a text statement on the package;
-- printing a standard symbol, to be developed by the US Department of Agriculture (USDA); or
-- printing a QR code that links to product information online.
The USDA has two years to define what counts as a "GMO".
Many firms and agricultural researchers had opposed mandatory labeling, arguing it suggests GM ingredients may not be safe, pointing out that there is no validated evidence they are harmful, and no basis for believing they are any more or less safe than new crop varieties obtained by more traditional means. Advocates of labeling replied, in effect, that was beside the point; consumers are entitled to know what they are buying.
The bill was the result of a deal crafted by Senators Pat Roberts (R-KS) and Debbie Stabenow (D-MI), the leaders of the Senate agriculture committee. Roberts had pushed the House voluntary bill, but it was shot down. Those voting against the Senate bill included 22 Democrats -- about half the party's Senate caucus -- along with Republican crossovers.
Critics say the bill includes no penalties for companies that don't comply with the law. The critics are also worried that the bill's definitions of GMOs will exempt some ingredients from labeling requirements -- most notably, oils and sweeteners refined from GMOs but not containing GMO DNA. The absurdity of that complaint is that it is effectively impossible to show such products differ in any significant way, whether they were derived from GMOs or not, and any concerns over impurities apply equally in either case.
Finally, the critics are not happy with the idea that the only labeling required can be a QR code, requiring consumers to use a smartphone to read it. Considering the weakness of the case against GMOs, that seems like a fair compromise: those who are honestly worried about GMOs will be able to determine the status of a product, while the greater number who don't care won't be prejudiced against a product by an alarmist "CONTAINS GMOS" label. Those who want to promote their products as "GMO FREE" can put any label, big and bold as they like, on a product -- as long as the product really is, as per USDA regulations yet to be established, GMO free.
Another beauty of the use of QR codes is that, should those making a fuss about GMOs find something new to raise a fuss over, no labeling needs to be changed: the producers will simply update their website entries. In short, S.764 sets a precedent for future labeling controversies. The website can also provide clarification, as long as it's truthful, pointing out that there's no evidence or logic that GM crops pose a categorical threat. If manufacturers should be held to transparency, then how can they be denied making their own case -- particularly when the attacks on them are based on blatantly selective use of demonstrable facts?
* DIGITAL INTELLIGENCE AGAINST TERROR (1): As discussed by an article from THE ECONOMIST ("The Terrorist In The Data", 28 November 2015), on the evening of 13 November 2015, Islamic terrorists attacked the Bataclan theater in Paris, killing 89 people attending a rock concert. A mobile phone was found in a bin near the theater, with the final text message:
On est parti on commence (We're off, we're starting).
The phone proved valuable to French authorities, helping to locate a flat in Paris that was raided by armed police on 18 November -- with the presumed mastermind, Abdelhamid Abaaoud, and two others dying in the confrontation. Another phone linked an abandoned suicide vest to Salah Abdeslam, a plotter who escaped to Belgium and became the most wanted man in Europe -- to be finally apprehended in March 2016.
The digital data that pervades modern society -- communications data, credit-card records, security TV footage, and much else -- provided invaluable clues into the attack, and helped to hunt down the survivors of the terror ring. However, much of the data that could have helped forestall the attack was not available in time. French authorities complained that no European country had warned them that Abaaoud -- who had fled to Syria and was wanted by the Belgian police -- had returned to France, even though he must have crossed one European frontier or another; the tip-off eventually came from Morocco. At least two attackers slipped into Europe via Greece, posing as refugees -- but police forces do not have routine access to the database of fingerprints for asylum-seekers.
The savage Paris terror attacks -- and more recent attacks -- pose a string of uncomfortable questions. Do Western intelligence agencies and police forces share information properly? Do they need to collect more data, and have greater powers to search it? Should data encryption be regulated? Europe has been more concerned with digital privacy than the US, but now the debate over how much privacy can be conceded has resurged.
The push for digital privacy had been accelerated by the revelations of Edward Snowden -- a fugitive contractor, now living in Moscow, for America's signals-intelligence outfit, the National Security Agency (NSA). He disclosed intelligence-gathering by America on its friends and foes alike. Snowden's admirers find him heroic; Western spooks see him as a traitor who violated his security oath and has, by provoking public hysteria, made hunting down terrorists and criminals much more difficult.
Different governments have different approaches to data collection, which can be roughly divided into "bulk collection" -- for example, digitally scanning through floods of metadata, such as the destinations of calls, in order to pick out patterns -- and "targeted surveillance" -- the more old-fashioned eavesdropping on the communications of a specific person or group. The US and Britain are notably big on bulk collection, in part because they are in a position to do so: the biggest internet firms are American, and some of the most important undersea fiber-optic cables run from the UK.
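As a toy illustration of what "bulk collection" pattern-scanning means in practice, the sketch below flags call destinations contacted by unusually many distinct callers. The records, names, and threshold are invented, and real systems operate at vastly larger scale with far more sophisticated analytics:

```python
from collections import defaultdict

def find_hubs(records, threshold):
    """Flag callees contacted by at least `threshold` distinct callers --
    a crude 'hub' pattern of the sort a bulk metadata scan might look for."""
    callers_by_dest = defaultdict(set)
    for caller, callee in records:
        callers_by_dest[callee].add(caller)   # track distinct callers per destination
    return {dest for dest, callers in callers_by_dest.items()
            if len(callers) >= threshold}

# Invented call metadata: (caller, callee) pairs, no content at all.
records = [("alice", "x"), ("bob", "x"), ("carol", "x"),
           ("alice", "y"), ("bob", "z")]
print(find_hubs(records, 3))  # {'x'}
```

The point of the example is that metadata alone -- who contacted whom, never what was said -- can surface patterns worth a closer, targeted look.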
The US has an intelligence court, where judges must give warrants for surveillance that includes Americans' private data; the system is also overseen by well-staffed congressional committees. American privacy advocates find even this too weak, but they have not been able to come up with realistic alternatives. In Britain, matters are simpler: responsibility for approving eavesdropping rests with the home secretary. France gives its intelligence and security services an even looser leash, especially after the Charlie Hebdo killings on 7 January 2015, which amounted to a "rehearsal" for the November terror attack.
The slaughters in France almost certainly mean that Europe is going to tip further from privacy towards security. On 20 November, a week after the Paris attacks, European interior ministers agreed to push through a plan to share "Passenger Name Record (PNR)" data for all travellers to, from and within the European Union. That exercise has been held up in the European Parliament because of concerns about data privacy; opponents are now likely to be sidelined.
The ministers also agreed to exchange more information about fighters travelling to and from Syria; check biometric data of all EU citizens at the external borders of the Schengen free-travel zone; and link European and national police databases more effectively. [TO BE CONTINUED]
* THE COLD WAR (122): Along with the aggressive games of the Soviets, President Eisenhower had to deal with being needled by the Democrats and their candidate, John F. Kennedy -- the theme of their election campaign being that the Eisenhower Administration had represented a certain inertia, complacency, and stagnation, and that national policy needed to be revitalized. That perception partly reflected being taken in by Eisenhower's low-key way of doing things, but there was still a general sense, which would grow enormously during the new decade, that the old decade stood for backwardness and fossilization, and that it was time for a fresh start. In his speech accepting the Democratic nomination on 15 July, JFK had spoken of a "New Frontier":
... As Winston Churchill said on taking office some twenty years ago: if we open a quarrel between the present and the past, we shall be in danger of losing the future. Today our concern must be with that future. For the world is changing. The old era is ending. The old ways will not do.
Abroad, the balance of power is shifting. There are new and more terrible weapons -- new and uncertain nations -- new pressures of population and deprivation. One-third of the world, it has been said, may be free -- but one-third is the victim of cruel repression -- and the other one-third is rocked by the pangs of poverty, hunger, and envy. More energy is released by the awakening of these new nations than by the fission of the atom itself.
Meanwhile, Communist influence has penetrated further into Asia, stood astride the Middle East and now festers some ninety miles off the coast of Florida ...
There has ... been a change -- a slippage -- in our intellectual and moral strength. Seven lean years of drouth and famine have withered a field of ideas ... It is a time, in short, for a new generation of leadership -- new men to cope with new problems and new opportunities.
All over the world, particularly in the newer nations, young men are coming to power--men who are not bound by the traditions of the past -- men who are not blinded by the old fears and hates and rivalries -- young men who can cast off the old slogans and delusions and suspicions ... we stand today on the edge of a New Frontier -- the frontier of the 1960s -- a frontier of unknown opportunities and perils -- a frontier of unfulfilled hopes and threats ...
... I tell you the New Frontier is here, whether we seek it or not. Beyond that frontier are the uncharted areas of science and space, unsolved problems of peace and war, unconquered pockets of ignorance and prejudice, unanswered questions of poverty and surplus ... I believe the times demand new invention, innovation, imagination, decision. I am asking each of you to be pioneers on that New Frontier ...
For the harsh facts of the matter are that we stand on this frontier at a turning point in history. We must prove all over again whether this nation -- or any nation so conceived -- can long endure -- whether our society -- with its freedom of choice, its breadth of opportunity, its range of alternatives -- can compete with the single-minded advance of the Communist system ...
... That is the question of the New Frontier. That is the choice our nation must make -- a choice that lies not merely between two men or two parties, but between the public interest and private comfort -- between national greatness and national decline -- between the fresh air of progress and the stale, dank atmosphere of "normalcy" -- between determined dedication and creeping mediocrity. All mankind waits upon our decision. A whole world looks to see what we will do. We cannot fail their trust, we cannot fail to try.
Kennedy's clear eloquence could not really conceal that the platforms of the two parties were more alike than different, with both focused on carrying on "Cold War as usual", or even intensifying it. Along with the theatrics about the "missile gap" -- all the more annoying to Eisenhower because he knew perfectly well how absurd the claim was; it didn't help that Nixon, trimming his sails to the winds, was also pushing for greater spending on defense in his campaign speeches -- the Democrats were calling for a national fallout shelter program, the Republicans trimming to that wind as well. The president didn't even try to humor such a contraption: it meant billions of dollars in Federal outlays, and fallout shelters would do little to affect matters if it came to the staggering cataclysm of a nuclear shootout.
Eisenhower had no wish to see John F. Kennedy as president; JFK was too tainted by his father Joe Kennedy, well-known to the public for both his wealth and lack of scruples, with Eisenhower worrying that the result of a Kennedy election would be an administration in the form of a corrupt political "machine bigger than Tammany Hall ever was." Eisenhower was also not happy with Kennedy's choice of Lyndon Baines Johnson as running mate, judging LBJ a pure politician, full of hot air and opportunism, with no competence or judgement under the theatrics. [TO BE CONTINUED]
* Space launches for June included:
-- 04 JUN 16 / GEO-IK 2 (COSMOS 2517) -- A Rockot Briz-KM booster was launched from Plesetsk Northern Cosmodrome in Russia at 1400 UTC (local time - 4) to put the "GEO-IK 2" AKA "Cosmos 2517" military geodetic studies spacecraft into near-polar low Earth orbit. The satellite had a launch mass of about 900 kilograms (2,000 pounds); it was intended to provide data to refine satellite tracking, global navigation, and guidance of long-range missiles.
-- 09 JUN 16 / INTELSAT 31 (DLA-2) -- A Proton M Breeze M booster was launched from Baikonur in Kazakhstan at 0710 UTC (local time - 6) to put the "Intelsat 31/DLA-2" geostationary comsat into orbit. The satellite was built by Space Systems / Loral, and was based on the new SSL-1300 "Epic" satellite bus. It had a launch mass of 6,450 kilograms (14,220 pounds), a payload of C / Ku-band transponders, and a design life of 15 years. The satellite was placed in the geostationary slot at 95 degrees west longitude; most of its capacity was leased to DirecTV Latin America to provide direct-to-home television broadcasts to Central America, South America and the Caribbean.
-- 11 JUN 16 / NROL 37 (USA 268) -- A Delta 4 Heavy booster was launched from Cape Canaveral at 1751 UTC (local time + 4) to put a classified payload for the US National Reconnaissance Office into orbit, the mission being designated "NROL 37". It was believed to be an Orion geostationary signals intelligence satellite, the ninth in the series since the 1980s. The Delta 4 Heavy featured three Common Booster Cores mounted together.
-- 12 JUN 16 / BEIDOU-2 G7 -- A Chinese Long March 3C booster was launched from Xichang at 1530 UTC (local time - 8) to put a "Beidou" navigation satellite into orbit. This was the 23rd Beidou payload launch; it was a second-generation Beidou satellite. The fully operational Beidou system will consist of 35 satellites in three types of orbits: geostationary orbit over the equator, inclined geosynchronous orbit, and medium Earth orbit 21,500 kilometers (13,350 miles) above Earth. This space platform was the seventh of the series to be placed in geostationary orbit.
-- 15 JUN 16 / EUTELSAT 117 WEST B, ABS 2A -- A SpaceX Falcon 9 FT booster was launched from Cape Canaveral at 1429 UTC (local time + 4) to put the "Eutelsat 117 West B" and "ABS 2A" geostationary comsats into orbit. Both space platforms were built by Boeing Satellite Systems, being based on the BSS 702SP satellite bus, with electric propulsion systems for orbit raising, payloads of 48 Ku-band transponders, and design lives of 15 years. Both satellites weighed about two tonnes (4,400 pounds); ABS 2A was somewhat heavier, since it was on the bottom of the payload stack, and so had framing to support Eutelsat 117 West B on top of the stack.
Eutelsat 117 West B was placed in the geostationary slot at 116.8 degrees west longitude to provide Latin America with video, data, government, and mobile services for Paris-based Eutelsat. It also carried a "Wide Area Augmentation System (WAAS)" payload, WAAS being a system for long-range air navigation, based on correction of GPS signals. The WAAS payload was funded by the US Federal Aviation Administration.
ABS 2A was placed in the geostationary slot at 75 degrees east longitude to distribute direct-to-home television, mobile, and maritime communications services across Russia, India, the Middle East, Africa, Southeast Asia and the Indian Ocean region for Asia Broadcast Satellite of Bermuda & Hong Kong. The Falcon 9 main stage attempted a soft landing, but it was destroyed on impact.
-- 18 JUN 16 / ECHOSTAR 18, BRISAT -- An Ariane 5 ECA booster was launched from Kourou in French Guiana at 2138 UTC (local time + 3) to put the "EchoStar 18" and "BRIsat" geostationary comsats into orbit. Echostar 18 was built by Space Systems / Loral, being based on the SS/L 1300 comsat platform. It had a launch mass of 6,300 kilograms (13,890 pounds), and a design life of 15 years. EchoStar 18 was placed in the geostationary slot at 110 degrees west longitude to provide direct-to-home television broadcast services over North America for EchoStar and Dish Network. It was the first Dish Network comsat to provide coverage for Cuba.
BRIsat was also built by Space Systems / Loral, being based on the SS/L 1300 comsat platform. It had a launch mass of 3,540 kilograms (7,805 pounds), a payload of 36 Ku-band / 9 C-band transponders, and a design life of 15 years. BRIsat was placed in the geostationary slot at 150.5 degrees east longitude to support banking services of BRI, a large Indonesian bank. The total mass of the two space platforms, factoring in the Sylda mating adapter, was 10,730 kilograms (23,655 pounds), making it the heaviest payload launched on an Ariane 5.
-- 22 JUN 16 / CARTOSAT 2C -- An ISRO Polar Satellite Launch Vehicle was launched from Sriharikota at 0356 UTC (local time - 5:30) to put the "Cartosat 2C" high-resolution Earth observation satellite into orbit. It had a launch weight of about 725 kilograms (1,600 pounds) and carried visible-range cameras, with a best resolution of 60 centimeters (2 feet), on a five-year mission.
It was the fifth spacecraft in the Cartosat series, and the third dedicated to military reconnaissance. Cartosat 2C followed the Cartosat 2A and 2B military satellites launched in 2008 and 2010 respectively. These were based on the 2007 Cartosat-2 civilian imaging satellite, which was in turn a successor to the 2005 Cartosat 1. Cartosat 2C went into a lower orbit than the earlier satellites, yielding improved views of strategic targets around the world.
The PSLV flew a total of 20 payloads, the others including:
This was the largest number of payloads ever flown on a single Indian rocket flight.
-- 24 JUN 16 / MUOS 5 -- An Atlas 5 booster was launched from Cape Canaveral at 1430 UTC (local time + 3) to put the fifth "Mobile User Objective System (MUOS)" geostationary military comsat into orbit for the US Navy. MUOS 5 was intended to provide narrowband tactical communications to significantly improve ground communications for US forces on the move.
MUOS 5 was built by Lockheed Martin and based on the A2100M satellite bus. The comsat had a launch mass of approximately 6,740 kilograms (14,860 pounds). Deployment of the MUOS constellation began with the launch of MUOS 1 in February 2012, with MUOS 2 following in July 2013, MUOS 3 being launched earlier in 2015, and MUOS 4 being launched in the summer of 2015. MUOS 4 completed the constellation, MUOS 5 being an on-orbit spare. A sixth satellite may be launched after 2018, with this space platform funded by international partners in exchange for access to the constellation.
The MUOS constellation replaces the seven "UHF Follow-On (UFO)" comsats launched between 1993 and 2003. The UFO comsats followed in turn from the FLTSATCOM spacecraft, lofted by Atlas-Centaur vehicles during the late 1970s and 1980s. UFO also served as a replacement for the five Leasat spacecraft operated by Hughes Communication Services for the US Navy. The rocket flew in the "551" configuration, with a 5-meter (16.4-foot) fairing, five solid rocket boosters and a single-engine Centaur upper stage.
-- 25 JUN 16 / LONG MARCH 7 MAIDEN FLIGHT -- A Long March 7 booster was launched at 1200 UTC (local time - 8) from the new Chinese Wenchang launch center on Hainan Island on the booster's maiden flight. It carried a boilerplate re-entry vehicle to test technologies for China's next-generation crewed spacecraft. It also carried four smallsats:
The booster's upper stage also carried an in-orbit refueling system experiment, which re-entered along with the upper stage. This was the first launch from Wenchang.
-- 29 JUN 16 / SHIJIAN 16-2 -- A Chinese Long March 4B booster was launched from Jiuquan at 0321 UTC (local time - 8) to put the second "Shijian 16" satellite into space; the first was launched in 2013. Shijian 16-2 was presumed to be a military signals intelligence satellite.
* PRINTED BRIDGES: 3D printing is a growth industry these days. Although it has yet to become established as a consumer technology, industry is embracing it enthusiastically, finding it particularly useful for rapid prototyping and fabrication of high-value, low-volume parts. As discussed by an article from THE ECONOMIST ("A Bridge To The Future", 5 September 2015), research is now being conducted to develop 3D printers for creating structures, such as bridges or buildings.
MX3D, a Dutch startup firm spun off from a furniture-maker, is planning to use an "external" 3D printer to build a steel footbridge across a canal in Amsterdam, once all the permits have been worked out. The bridge will have a span of up to 15 meters (49 feet) and will be put together as a unified assembly, not pieced together from prefabricated sections.
At present, the most common way to print metal structures is "laser sintering" AKA "selective laser melting", which is performed inside a printing machine. The process spreads a layer of metal powder onto a base, then uses a high-power laser to fuse the particles into the shape of the first layer. The base is then lowered, another layer of powder is spread on top of it, and the laser fuses the second layer; and so on, until the part is complete. The entire process is performed automatically, under software control.
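The layer-by-layer cycle just described can be sketched as a simple control loop. The `MockMachine` class here is a stand-in that merely logs commands -- real machine interfaces are proprietary and vendor-specific:

```python
class MockMachine:
    """Stand-in for a sintering machine: records commands instead of
    driving hardware (for illustration only)."""
    def __init__(self):
        self.log = []
    def spread_powder(self):
        self.log.append("spread")
    def fuse_layer(self, cross_section):
        self.log.append(("fuse", cross_section))
    def lower_base(self):
        self.log.append("lower")

def print_part(layers, machine):
    """Build a part layer by layer from sliced cross-sections."""
    for cross_section in layers:
        machine.spread_powder()            # lay down fresh metal powder
        machine.fuse_layer(cross_section)  # laser fuses this layer's shape
        machine.lower_base()               # drop the base one layer height

m = MockMachine()
print_part(["base plate outline", "first wall section"], m)
print(m.log)
```

The `layers` list would in practice come from slicing a 3D part model into thin cross-sections, which is the software side of the process.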
The MX3D system takes a different approach: it uses industrial robots to build structures additively. The robot arms are fitted with specially developed welding heads; they don't actually weld anything together, they just continuously lay down weld material, effectively "drawing" out long rods of steel. The robots will either sit on the bridge as they fabricate it, or operate from a barge in the canal. It should take three months to build the bridge. The project is being supported by Autodesk, US maker of design and engineering software; Heijmans, a Dutch construction firm; and ABB, a Swiss-based maker of industrial robots.
The bridge will have a strong filigree-like structure that has been "optimized" by engineering software for the most efficient shape. The bridge, being built as a continuous structure, can be made much lighter than a bridge of similar capability, made from standardized beams and other components; the printed bridge tends to look "grown" instead of built, and in a way, it is.
MX3D engineers haven't decided just yet what kind of steel to use to build the bridge; it could be standard steel, which would need to be painted, or stainless steel, which doesn't. A "weathering" steel is also a possibility, featuring an alloy mix that quickly forms a coating of brown oxide -- giving it a rusty appearance, but inhibiting further corrosion, meaning it doesn't need to be painted either.
The construction industry is already using 3D printed components for customized interior-decoration features, lighting effects and furniture. There's growing interest in moving on to printing large structural assemblies or even entire buildings, on the belief that printing will reduce construction cost and time, as well as permit greater flexibility in design. However, major obstacles remain, in particular developing construction materials and meeting building codes. Several collaborative projects are currently in progress:
Winsun, a Chinese company, has built a number of houses, including a five-storey apartment building, based on 3D-printed prefab assemblies. The scheme uses a six-meter (20-foot) high 3D printer to ooze a fast-drying paste made from a mixture of cement and recycled waste from construction sites. Under computer control, the machine deposits the paste layer-by-layer to create walls and other sections of the building. These parts are then joined together at the construction site, using steel reinforcing bars.
Ultimately, the goal is to print an entire building on site. A research team under Behrokh Khoshnevis at the University of Southern California is working on a scheme called "contour crafting", in which robots print an entire structure, obtaining materials from their immediate environment. The effort is being backed by the US National Aeronautics & Space Administration, the objective being to set up structures on the Moon, or elsewhere, using local materials. The system would make concrete of a sort from locally-obtained water and surface deposits, piping them out an extrusion nozzle, with a pair of automated trowels shaping the extruded material as desired. The European Space Agency is conducting similar research. As for when 3D-printed buildings will be common on Earth, nobody can say -- but it might not be that far off.
* FADING OZONE HOLE: In 1985, it was discovered that the ozone layer over the Antarctic was thinning, resulting in a "hole" over the region. The problem was determined to be due to human production of chlorofluorocarbons (CFCs), primarily used as refrigerant working fluids. CFCs are very stable, and migrated to the upper atmosphere, resulting in the breakdown of upper-atmosphere ozone. This presented a potential global hazard, since the ozone layer blocks damaging solar ultraviolet from reaching the ground.
That led to the 1987 Montreal Protocol, which dictated the phasing-out of CFCs. A team of researchers under Susan Solomon, an atmospheric chemist at the Massachusetts Institute of Technology (MIT), has now shown the Antarctic ozone hole is clearly shrinking.
Layers of depleted ozone open up over both poles just as winter gives way to spring. During the wintertime cold, nitric acid and water condense out of the atmosphere and form wispy clouds. The surfaces of the cloud particles host chemical reactions that release chlorine from CFCs. The chlorine, in turn, goes on to catalyze the destruction of ozone -- but only in the presence of light. That is why, over Antarctica, ozone loss doesn't get going in earnest until September, the beginning of the southern spring, when light returns to the pole. Peak losses are usually in October, and that is when researchers have typically taken stock of year-to-year changes in the hole.
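The catalytic role of chlorine mentioned above can be spelled out. The standard reaction cycle -- a textbook summary, not taken from the article itself -- is:

```latex
\mathrm{Cl} + \mathrm{O_3} \rightarrow \mathrm{ClO} + \mathrm{O_2} \\
\mathrm{ClO} + \mathrm{O} \rightarrow \mathrm{Cl} + \mathrm{O_2} \\
\text{net:}\quad \mathrm{O_3} + \mathrm{O} \rightarrow 2\,\mathrm{O_2}
```

The chlorine atom comes out of the cycle unchanged, which is why it acts as a catalyst: a single atom can destroy many ozone molecules before it ends up locked in a reservoir compound.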
Solomon and her colleagues found the healing trend was more apparent in the month of September. Using a combination of measurements from satellites, ground-based instruments, and weather balloons, her team found that, since 2000, the September hole has shrunk by 4 million square kilometers -- an area bigger than India.
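As a sanity check on that comparison -- India's land area, roughly 3.29 million square kilometers, is an outside figure, not from the article:

```python
# Compare the reported September shrinkage of the ozone hole with India's area.
shrinkage_km2 = 4.0e6      # reported shrinkage since 2000
india_area_km2 = 3.29e6    # India's land area (approximate outside figure)

ratio = shrinkage_km2 / india_area_km2
print(f"shrinkage is {ratio:.2f} times India's area")
```

The reported shrinkage does indeed exceed India's area, by about a fifth.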
Of course, there was the question of linking the shrinkage to the limitations on CFCs imposed by the Montreal Protocol, and demonstrating it was not due to other causes. The researchers used a 3D atmospheric model to separate the effects of the chemicals from those of weather, which can affect ozone loss through winds and temperature; and volcanic eruptions, which deplete ozone by pumping sulfate particles into the upper atmosphere. The sulfate can play the same role as cloud particles, activating chlorine from CFCs. The model showed why a record ozone hole appeared in October 2015 -- it was due to the eruption of the Calbuco volcano in southern Chile six months earlier.
Earlier studies had shown shrinkage of the Antarctic ozone hole, but they were controversial; Solomon's study has been more widely accepted, though it has its critics. One issue is that only half the shrinkage was attributed to the results of the Montreal Protocol, with the other half apparently due to changed weather patterns; some believe that uncertainty over half the estimate reflects badly on the estimate as a whole. However, in the absence of a demonstration of where the study went wrong, such criticisms carry little weight.
Although the ozone hole isn't expected to disappear until mid-century at the earliest, the study is seen as highly encouraging -- not merely in envisioning an end to the ozone problem, but also in suggesting that the greater effort to address climate change may eventually pay off as well.
* AI REVOLUTION (5): The previous installment in this series discussed worries over artificial intelligence and creeping automation putting everyone out of a job. An article from THE ECONOMIST ("I'm Afraid I Can't Do That", 4 June 2016) made a more detailed case to show the fears are overblown.
There's nothing new about fears of automation. There were waves of such panics in the 1960s -- when firms first installed computers and robots -- and the 1980s -- when PCs landed on desks. Each time, in fact, technology ultimately created more jobs than it destroyed, as the automation of one chore increased demand for people to do the related tasks that were still beyond machines. When automatic teller machines were introduced, the number of cashiers in America actually rose, since the devices helped to cut costs, enabling banks to open new branches.
Are the fears justified this time around? Some believe so. One widely cited paper by Carl Frey and Michael Osborne at Oxford University in the UK determined that as many as 47% of Americans work in jobs that will be highly vulnerable to automation over the next two decades. However, a paper by Melanie Arntz, Terry Gregory, and Ulrich Zierahn of the Centre for European Economic Research (CEER) replied with a more conservative view. The Oxford study quizzed experts on the chance that a particular occupation could be automated, and then tallied up the proportion of American workers in such jobs. The CEER study suggests that this approach was inaccurate.
By examining more detailed data, the CEER researchers found that many jobs involve sets of tasks, only some of which machines can easily handle. Take clerks in book-keeping, accounting, and auditing: the earlier study said the odds of computers supplanting them over the next 20 years were 98%. However, the newer study found that three-quarters of those jobs involve some group work or face-to-face interaction, which are not tasks easily automated. Applying a similar analysis to all jobs, they find that only 9%, not 47%, are at high risk of automation.
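The methodological difference can be made concrete with a toy calculation. Everything below is invented for illustration -- the jobs, the task lists, and the 70% threshold are not from either study, and the Oxford approach actually polled experts on whole occupations rather than applying a simple rule:

```python
# Toy contrast between occupation-level and task-based automation estimates.
# Each job maps tasks to whether a machine could plausibly handle them.
jobs = {
    "bookkeeping clerk": {"data entry": True, "reconciliation": True,
                          "client meetings": False, "team coordination": False},
    "data-entry operator": {"keying records": True, "validation": True,
                            "filing": True},
    "care worker": {"scheduling": True, "personal care": False,
                    "face-to-face support": False},
}

def occupation_level_at_risk(tasks):
    # Coarse view: if any core task is automatable, count the whole job.
    return any(tasks.values())

def task_level_at_risk(tasks, threshold=0.7):
    # Task-based view: count the job only if most of its tasks are automatable.
    share = sum(tasks.values()) / len(tasks)
    return share >= threshold

coarse = sum(occupation_level_at_risk(t) for t in jobs.values())
fine = sum(task_level_at_risk(t) for t in jobs.values())
print(coarse, "of", len(jobs), "jobs at risk (occupation-level view)")
print(fine, "of", len(jobs), "jobs at risk (task-based view)")
```

Under the coarse rule all three toy jobs look doomed; under the task-based rule only the one made up entirely of machine-friendly tasks does -- the same direction of revision the CEER researchers found in the real data.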
Some cautions are required:
Even at that, the two studies only looked at what is technically feasible, and didn't factor in that human workers might be more cost-effective in some circumstances than machines. Car manufacturer Renault-Nissan uses robots more intensively in Japan than in lower-wage India. Even if a wave of automation sweeps over the workforce, total employment may not fall. Innovation could lower prices and so stimulate incomes indirectly, boosting demand for new jobs elsewhere.
From the Luddites to Keynes, many have peddled dire visions of mass technological unemployment, only to find it hasn't happened so far. The classic example that advocates of automation never tire of pointing out is agriculture, which once provided a high proportion of employment in society. Now it's highly automated, only providing direct employment to a relative handful, while providing indirect employment to a big agribusiness industry -- as well as producing cheap food for a much larger population. Agricultural automation continues to advance, and few have any fears of it.
The transition, in any case, is likely to be slow. Cars with self-driving features are already available, and becoming more sophisticated over time, with full autonomy expected sometime in the next few decades. How long it will take robocars to take over completely from human drivers is anyone's guess, but few think it will be any time soon.
Nonetheless, the future workforce will need to be flexible, learning new skills through their working life. AI may be a big help; the prospect of every student having a personal robot tutor promises a revolution in education, though it will not be easily achieved. Social and character skills will also matter more.
Welfare systems will need to be updated, to smooth the transitions between jobs and to support workers while they pick up new skills. Countries might well learn from Denmark's "flexicurity" system, which lets firms hire and fire easily, while supporting unemployed workers as they retrain and look for new jobs. Benefits, pensions and health care should follow individual workers, instead of being tied to employers, to provide a framework of career security, instead of job security. So far, there's been little sign that industrial-era education and welfare systems are being modernized and made flexible. We are approaching the third decade of the 21st century; the leadership is only slowly realizing that we can't drive forward by watching the rear-view mirror.
* THE COLD WAR (121): Eisenhower's trip to the Far East over, the president had to address ongoing problems, Cuba being at the top of the list. The US was ramping up a propaganda offensive against Castro -- as well as the Dominican Republic's Trujillo, the president having no liking for him either, and finding it convenient to tar both Castro and Trujillo with the same brush. There was one significant difference between the two cases: the possibility that Cuba would form a military alliance with the Soviet Union, raising the specter of a nuclear-armed Cuba threatening the US. Cuba was already obtaining East Bloc arms, claiming them to be for strictly defensive purposes. That was not too alarming in itself, but would it stop there? Eisenhower commented in a White House meeting on 29 June that the US "would not tolerate" any military alliance between the USSR and Cuba.
On 6 July, the president signed legislation that slashed the quota for Cuban sugar imports, reducing it to zero for 1961, firing an initial volley of economic warfare at Castro. The Soviets agreed to buy the sugar instead, in exchange for Soviet crude oil.
Khrushchev was not concealing the friendship between the USSR and Cuba, with the premier making his typical loud threats, saying the Soviet Union would use its rockets to defend Cuba, but disclaiming any intention to put missiles on the island. That actually reassured Eisenhower, who didn't think the Soviets would do anything so provocative. The president also found Khrushchev's bluster useful: it antagonized the leadership of many Latin American countries, providing a useful counterweight to the anti-American rhetoric being peddled by Castro.
At a 7 July NSC meeting, Defense Secretary Gates outlined a series of options for dealing with Cuba, the most extreme being an outright invasion of the island. Although Nixon wanted visible action, Eisenhower was not at all enthusiastic about the overt use of force against Castro, separately telling Republican leaders that, in doing so, the US might "lose all of South America."
* Khrushchev then went back on Eisenhower's front burner. On 1 July, a Boeing ERB-47H electronic reconnaissance aircraft on a "ferret" mission over the Barents Sea, snooping for signals from Soviet radars and other radio emitters on the coast, simply disappeared along with its six crew, with a week's search turning up nothing. It wasn't until 11 July that Khrushchev started raising the roof, claiming that the aircraft had overflown Soviet airspace and been shot down. He denounced American "provocative actions" -- and also threatened the British, the aircraft having been based in the UK.
The ERB-47H had been in international airspace, there being no need for it to overfly Soviet territory to pick up the traces of Soviet emitters. Soviet fighters often "escorted" such snoopers, but for various reasons -- the high state of tensions, misunderstandings, simple spite -- a fighter shot the ERB-47H down. Two of the crew, USAF Captains Freeman Olmstead and John McKone, had been picked up by a Soviet trawler; the body of Major Willard Palm, the aircraft commander, had been recovered, though the bodies of the other three crew would never be found. Olmstead and McKone were in Soviet custody, being treated as spies and run through the interrogation mill. Eisenhower was reassured that the aircraft had been in international airspace -- but could he believe it? Could he prove it?
The UN Security Council began discussion of the issue on 22 July, with the Soviets pushing for a resolution to condemn American overflights of Soviet territory. Lodge gave a presentation showing the flight track of the ERB-47H, showing it indeed was over international waters when it was shot down, and stating that the Soviet fighter had tried to force it to fly into Soviet airspace before blowing it out of the sky. Reconnaissance crews were under strict orders not to comply with such directives, regardless of the consequences.
Although Lodge couldn't give the details of how the aircraft had been tracked, saying it was through "secret devices" being carried on it, his argument was persuasive. The resolution was voted down on 26 July, Poland and the Soviet Union being the only Security Council members to vote in favor of it. That could have been no surprise, nor was it any surprise that the Soviets vetoed counter-resolutions from the US and Italy. The Soviets then began to push much the same resolution as they had presented to the Security Council through the UN General Assembly, but it would be overwhelmingly voted down a few months later.
The Soviets were not able to get any traction out of the incident; indeed, it did much to suggest to the world the USSR was warlike, brutish, and lawless. About a month after the shootdown, the Soviets returned the body of Major Palm. However, the two crewmen still in Soviet custody were not going to be released as long as Eisenhower was in office. [TO BE CONTINUED]
* GIMMICKS & GADGETS: As discussed by a note from IEEE SPECTRUM Online ("Swiss Considering $3.4 Billion Cargo Tunnel for Automated Delivery Trucks" by Evan Ackerman, 1 Feb 2016), Switzerland is increasingly gridlocked by road traffic. Now an initiative named "Cargo Sous Terrain (Underground Cargo)" is proposing a logistical support network, based on underground tram lines of automated delivery carts -- all for $3.4 billion USD.
Cargo Sous Terrain has presented a feasibility study detailing the plan. The idea is to dig a 66.7-kilometer (41.4-mile) long, 6-meter (19.7-foot) wide tunnel 50 meters (164 feet) below ground. The thoroughfare would connect Zurich with logistics centers out to the west, south of Bern. The pilot tunnel would connect to four above-ground waystations that link the tunnel to cargo transfer points. The eventual goal would be to expand the dedicated cargo network so that it connects Zurich with Lucerne, and eventually Geneva.
The tunnel itself would contain three lanes for autonomous, inductively powered electric carts, basically cargo containers on wheels. The carts would zip along at 30 KPH (18.6 MPH); above them, there would be a separate monorail system whisking smaller packages from Point A to Point B at twice that speed. Power to run the underground parcel-hauling system is supposed to come from renewables, including the energy collected by solar panels mounted on the roofs of the transfer stations.
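From the figures above, rough end-to-end transit times for the pilot tunnel work out as follows -- a back-of-the-envelope sketch, assuming cargo traverses the full 66.7 kilometers:

```python
# Rough end-to-end transit times for the pilot tunnel, from the figures above.
tunnel_km = 66.7
cart_kph = 30.0        # autonomous carts
monorail_kph = 60.0    # overhead monorail, "twice that speed"

cart_hours = tunnel_km / cart_kph
monorail_hours = tunnel_km / monorail_kph
print(f"carts: {cart_hours:.1f} hours, monorail: {monorail_hours:.1f} hours")
```

A bit over two hours for the carts and a bit over one for the monorail -- slow by highway standards, but running around the clock, unaffected by surface traffic.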
The idea would be to reduce the size and increase the number of deliveries. While intriguing, it seems over-ambitious: schemes that are only workable at full scale are hard to get off the ground. Even if funding were secured, the earliest we could expect to see the tunnel operational is 2030. Still, without it, by that time Switzerland may well be in an era of permanent gridlock.
* As discussed by an entry from WIRED Online blogs ("How LEDs Are Making Weed Better" by Sarah Zhang, 15 October 2015), marijuana growers, used to raising their crop in the basement, have long been interested in the most effective technologies for indoor farming. They are now very enthusiastic about energy-efficient LEDs whose light can be tuned to optimize plant growth -- not only making the crop easier to grow, but also more potent.
There are lighting firms, such as Illumitex, that cater to the indoor horticulture market -- and given the growing decriminalization of marijuana, don't conceal that marijuana-growers are a significant component of the customer base. LEDs have become the preferred lighting technology for these customers; although they can cost twice as much up-front as the discharge lamps previously used by growers, they can use up to 60% less power, which also can mean less need to air-condition a growing space to get rid of the excess heat from lighting. Indoor farms tend to be energy hogs, with tales of marijuana growers being nailed by the authorities from their electric bills.
Development of magenta (blue-red) LEDs has been a particular benefit to growers, those being the two bands that plants need. Of course, not all growers raise pot; food plants grown on the International Space Station are grown under LEDs as well, as is the produce from FarmedHere, America's oldest indoor farm. FarmedHere switched from fluorescents to LEDs a few years back, with company officials now saying they save $45,000 USD a year on energy bills.
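A back-of-the-envelope payback calculation shows why growers accept the higher up-front price. The 2x purchase cost and "up to 60% less power" figures come from the article; the lamp price, wattage, light cycle, and electricity rate below are purely hypothetical round numbers:

```python
# Back-of-the-envelope payback for a grower switching to LED lighting.
discharge_watts = 1000.0            # assumed high-intensity discharge lamp
led_watts = discharge_watts * 0.4   # LEDs: up to 60% less power
hours_per_year = 12 * 365           # assumed 12-hour daily light cycle
rate_per_kwh = 0.12                 # assumed electricity rate, USD/kWh

discharge_cost = 350.0              # hypothetical lamp price, USD
led_cost = 2 * discharge_cost       # "twice as much up-front"

annual_savings = (discharge_watts - led_watts) / 1000 * hours_per_year * rate_per_kwh
payback_years = (led_cost - discharge_cost) / annual_savings
print(f"annual energy savings: ${annual_savings:.2f}, payback in {payback_years:.1f} years")
```

Under these assumptions the extra cost pays for itself in about a year -- and that ignores the reduced air-conditioning load the article mentions, which would shorten the payback further.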
* As discussed by an item from WIRED Online blogs ("ThingMaker Is for Kids, But You'll Want This 3-D Printer for Yourself" by Brian Barrett, 18 February 2016), toymaker Mattel is now getting into the 3D printing business, with a slick product package named "ThingMaker". It follows in the steps of a 1960s Mattel product named Thingmaker, which let kids fabricate bug-like Creepy Crawlers, mini-dragons, flowers and other small toys by pouring liquid plastic onto special molds, which were then heated up and cooled.
The new ThingMaker is far more sophisticated. It allows kids, or adults who like clever toys, to design and print parts that have a common snap-together interface, the software having been designed by Autodesk. Users can work from an existing template or grow their own in the "ThingMaker Design" app, which is based on simple drag-&-drop operation and provides 3-D visualization to show how the finished product will look. It can also import and print 3D items from other sources. Along with the usual snap-together constructs -- dinosaurs and the like -- it also has a snap-together jewelry section. Push a button, and ThingMaker prints out the item. It won't be for sale until the fall; cost is expected to be about $300 USD.
* THE LIFESPANS OF ANIMALS: As discussed by an article from AAAS SCIENCE ("Why We Outlive Our Pets" by David Grimm, 4 December 2015), when we watch our pets grow old, it's hard not to think that, to them, we must seem like immortals, aging at a much slower rate. Humans may live well over a century, though the average lifespan is, at present, 71 years. Contrast this with cats, which typically live no more than about 15 years, and dogs, which typically live no more than about 12 years -- though one Texan cat named Creme Puff lived to be 38, while an Australian cattle dog named Bluey lived to 29.
The big question, of course, is why we live so much longer than our pets do. There seems to be no fundamental physiological reason why humans live so much longer; at a biosystems level, we're not categorically different from cats or dogs, or even mice; we simply have some sort of biological timer that's set longer. If we could understand the factors in aging, we might be able to live much longer healthy lives -- and extend the lives of our pets as well. Studies of the aging of pets are giving insight into those factors, with ideas being floated that may help explain everything from why small dogs live longer than big ones, to why cats tend to outlive dogs.
Daniel Promislow -- an evolutionary geneticist at the University of Washington in Seattle, and co-leader of the Dog Aging Project, which aims to extend the canine life span -- says that figuring out how animals age is a "fascinating problem. It integrates behavior, reproduction, ecology, and evolution. If we can understand how to improve the quality and length of life, it's good for our pets and it's good for us. It's a win-win."
* The question of animal aging is nothing new. In 350 BCE, the Greek scholar Aristotle considered the matter, and concluded it had something to do with moisture: Elephants outlast mice because they contain more liquid and so take longer to dry up. Aristotle was right in concluding that bigger animals tend to live longer than smaller ones, but nobody thinks he was on the right track as to why. According to Steven Austad, a biogerontologist at the University of Alabama in Birmingham, it's not like there's been that much improvement on Aristotle: "All of the other hypotheses have fallen apart."
One idea that was popular in the last century was that animals with higher metabolic rates live shorter lives because they run out their body clock faster. The idea hasn't survived detailed examination: parrot hearts can beat up to 600 times per minute, for example, but parrots outlive by decades many creatures with slower heartbeats. Other notions include the idea that short-lived animals generate more tissue-damaging free radicals, or have cells that stop dividing sooner, but these haven't been supported by the evidence.
Austad got into his field of study by an unorthodox path. He was a lion trainer in the 1970s, until one of the cats tore up his leg; he decided to get into a less hazardous line of work. By the mid-1980s, he was observing opossum behavior in Venezuela as a postdoc, noticing just how fast they would age: "They'd go from being in great shape to having cataracts and muscle wasting in 3 months." That was interesting; what made it more interesting was that opossums on a nearby island that lacked predators seemed to age more slowly and live longer. Austad suggests an evolutionary basis for the correlation between size and long lifetime that Aristotle observed: large animals tend to live longer, because they face fewer dangers.
Whales and elephants can afford to take their time growing, because as they get bigger, the likelihood that predators will take them down gets smaller. Yes, some predators emerge that can kill large prey, but they tend to die out if their prey population falls. Sabre-tooth cats appear to have evolved multiple times, to die out again and again. Little animals like mice, on the bottom of the food chain, tend to live on borrowed time, and so live out their lives in fast-forward. Mice only live to be a few years old, assuming they're not eaten first. Small animals are likely to be prey sooner or later, so there's no evolutionary motive for them to have a lifespan longer, on the average, than they would have in the absence of hazards.
There are exceptions to the size-longevity correlation: mole rats and bats can live 30 and 40 years respectively, neither being as vulnerable to predators as mice. There's also an inversion with dogs, which have been selectively bred and largely protected from predation: large dogs like Irish wolfhounds might live to be seven years old at most, while little dogs can live a decade longer. Size in dogs is controlled by growth hormones such as "insulin-like growth factor 1", and such hormones may play a role in aging. Promislow also suggests that the dog frame isn't well-adapted to large size, with big dogs having more health problems than little ones -- though it's hard to sort such a general trend from the afflictions of inbreeding.
Even a century ago, cats and dogs didn't live nearly as long as they do now. Dog life expectancy has doubled in the past four decades, and housecats now live twice as long as their feral counterparts. That's because they're being fed and cared for better, with Americans currently spending $60 billion USD a year on pampering their pets. Joao Pedro de Magalhaes -- a biogerontologist at the University of Liverpool in the UK who maintains AnAge, the world's biggest database of animal lifespans -- comments: "The same things that allow us to live longer also apply to our pets."
As problems of a comfortable life such as obesity become more common, longevity gains are running into diminishing returns. Research continues on the aging of dogs and cats, in hopes of establishing why different animals age at the rate they do. After all, we have more medical records on pets than we do on any animal except us, and we are learning more about their genomes and biology all the time. Maybe they will show us how to slow down aging? Magalhaes says: "I don't think there's a set max longevity for any species. The real question is: How far can we go? Maybe a thousand years from now, you could have a dog that lives 300 years."
* TACKLING ANTIBIOTIC RESISTANCE: The problem of growing pathogen resistance to antibiotics has been discussed here in the past, last in 2011. More and more pathogens are acquiring resistance to antibiotics and other drugs used to treat them, and development of new drugs is not keeping up, leading to the prospect of a growing calamity.
As discussed by an article from AAAS SCIENCE NOW Online ("Guarantee Drug Companies A Profit To Develop New Antibiotics, UK Report Says" by Kai Kupferschmidt, 18 May 2016), a recent report commissioned by the British government has proposed solutions to the problem of "antimicrobial resistance (AMR)", major bullet items including:
The report was commissioned in 2014 by British Prime Minister David Cameron and the Wellcome Trust. The head of the research group that produced the report was economist Jim O'Neill, currently the commercial secretary to the treasury in the United Kingdom.
At the present time, an estimated 700,000 people die every year from drug-resistant infections -- though it should be noted that this estimate does include people who would have died even if their infections hadn't been resistant. By 2050, drug-resistant infections could kill more people than cancer, according to the new report. The report was not the first warning of the problem, but its sponsorship by the British government has given it a high profile with world leaders. It is expected to be presented to the United Nations General Assembly later in 2016.
The report makes a strong economic argument for taking on AMR. In addition to its health toll, by 2050 AMR could cost the world $100 trillion USD in lost economic output. In contrast, acting on the recommendations in the report -- such as supporting the development of new antibiotics and diagnostics, as well as strengthening surveillance -- could cost as little as $3 billion USD to $4 billion USD a year, about 0.05% of what the G20 countries spend on health care today.
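As a quick cross-check of the report's framing, the quoted figures imply a G20 health-care spend of roughly $7 trillion a year -- simple arithmetic on the article's numbers, using the midpoint of the cost range:

```python
# Cross-checking the report's cost framing with simple arithmetic.
annual_cost_usd = 3.5e9          # midpoint of the $3-4 billion/year estimate
share_of_health_spend = 0.0005   # "about 0.05%"

implied_g20_health_spend = annual_cost_usd / share_of_health_spend
print(f"implied G20 health spending: ${implied_g20_health_spend / 1e12:.0f} trillion per year")
```

Set against a projected $100 trillion in lost output by 2050, the asymmetry of the economic argument is hard to miss.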
The report calls for a two-tiered approach:
The report urges that some major antibiotics not be used in agriculture at all -- drugs are commonly, often indiscriminately, given to livestock to treat infections or boost growth. Certifying meat as raised with "responsible use" of antibiotics could allow consumers to drive down their use. Observers believe the agricultural sector will push back on this recommendation, and so there should be an effort to investigate alternative approaches to use of antibiotics in agriculture.
The report also offers suggestions on how to boost the development of drugs effective against resistant infections. Many pharmaceutical companies have abandoned antimicrobial drug development because it is not very profitable. Global sales of patented antibiotics are roughly $4.7 billion USD a year, which is about as much as a single top-selling cancer drug, the authors note. A "Global Innovation Fund" endowed with up to $2 billion USD is needed to fund early-stage research; a bonus of $1 billion for a company that develops a new drug effective against resistant infections could also help. The money could be raised as a levy from these companies through a "pay-or-play" strategy, in which companies can either pay up, or invest the equivalent in research and development to fight AMR.
Andrew Read -- an evolutionary biologist based at Pennsylvania State University in University Park, who studies AMR -- says the report is "a call to action, and we sure need that." However, while he believes curbing the use of unnecessary antibiotics is important, he thinks the bigger challenge is learning to use the drugs when needed in a way that doesn't select for resistant strains: "If evolutionary considerations became an essential component of medical best practice, we'd get immense gains -- even bigger than those that will come from rapid diagnostics and less agricultural use."
* AI REVOLUTION (4): After decades of slow progress, artificial intelligence is now going from strength to strength; in doing so, does it really pose a threat to humanity? Few involved in AI research doubt that it will be possible to build AI systems at least as smart as humans, in some ways possibly more so; might they ultimately replace us?
The threat, if threat there is, remains vague. Part of the problem, says Rodney Brooks, who was one of AI's pioneers and who now works at Rethink Robotics, a firm in Boston, is a confusion around the word "intelligence". Computers can now do some narrowly defined tasks which only human brains could manage in the past -- the original "computers", after all, were humans, usually women, employed in groups to do the sort of tricky arithmetic that digital computers find trivially easy. An image classifier may be spookily accurate, but it has no goals and no motivations other than to complete its given task, and has not the least inclination to question its purpose in existence.
The simple reality is that we are building AI systems as servants, to perform certain classes of tasks, and have no reason to build them to work like humans except to the extent that it helps them perform such tasks. We have no reason to develop an AI that, as fully as possible, mimics a human, except as an exercise in curiosity. Yes, we will build AI systems that seem humanlike to us, and we will treat them as if they were human -- but they will still be servants, with no thought in their electronic minds except to serve.
AI systems are still a long way from being convincingly humanlike in any strong sense. A paper presented at a computer-vision conference showed optical illusions designed to fool image-recognition algorithms; humans, having more context and a general world-view, are much harder to trick. It is even possible to construct images that, to a human, look like meaningless television static, but which neural networks nevertheless confidently classify as real objects.
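The fragility being described can be illustrated with a toy model. The sketch below uses a hypothetical linear classifier as a stand-in for a neural network, and a gradient-sign perturbation in the spirit of published adversarial-example attacks; it illustrates the principle, not the method of the paper cited above, and assumes NumPy is available:

```python
import numpy as np

# A tiny, spread-out perturbation flips a classifier's decision.
rng = np.random.default_rng(0)
w = rng.normal(size=100)    # weights of a "trained" linear classifier
x = rng.normal(size=100)    # an input "image", flattened to a vector

def classify(v):
    return 1 if w @ v > 0 else 0

score = w @ x
# Gradient-sign step: the gradient of (w @ x) with respect to x is just w,
# so nudge every pixel a small, equal amount against the current score.
eps = (abs(score) + 0.1) / np.abs(w).sum()   # just enough to cross the boundary
x_adv = x - np.sign(score) * eps * np.sign(w)

print("label before:", classify(x), " after:", classify(x_adv))
print("largest per-pixel change:", float(np.abs(x_adv - x).max()))
```

Every pixel moves by the same tiny amount, yet the label flips; real attacks do the equivalent along the gradients of a trained network, producing the "static" images that fool classifiers.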
This is not to say that AI creates no worries. Yes, we will build AI systems as servants, but they will also be built as weapons -- of course, what technologies have we not used as weapons? More significantly, as AI systems keep on becoming more capable, they will increasingly take over tasks long reserved for humans. Who will need translators when machines can do perfectly good translations on their own? AI systems are increasingly moving into financial analysis and customer support. Many firms already use computers to answer telephones, for instance. For all their maddening limitations, and the need for human backup when they encounter a query they cannot understand, they are cheaper than human beings.
Still, consider the opportunities. Automated, cheap translation is surely useful, and having an untiring computer checking medical images would be as well. Possibly the best way to think about AI is to see it as simply the latest in a long line of cognitive enhancements that humans have invented to augment the abilities of the human brain. It is a high-tech relative of technologies like paper, which provides a portable, reliable memory, or the abacus, which aids mental arithmetic.
Just as the printing press put scribes out of business, effective AI will cost jobs. But it will enhance the abilities of those whose jobs it does not replace, giving everyone access to mental skills possessed at present by only a few. These days, anyone with a smartphone has the equivalent of a city-full of old-style human "computers" in his pocket, all of them working for nothing more than the cost of charging a battery. In the future, they might have translators or diagnosticians on call as well.
In the end, AI will enhance human beings, not replace them. There was a time when finding out information was laborious, requiring scouring the stacks at a library. Now, in the internet era, we have almost all the information we need at our fingertips. Few see this as a calamity, or would prefer to go back to the days of being in the dark. In a future generation, a citizen will be born with an AI in attendance, initially just collating information, but later becoming a teacher and a personal servant, coordinating the volumes of data needed to run our lives. Would we see this as a frightening dependence? No, we would be frightened of the alternative, of trying to make our way in a digital world in the dark. [TO BE CONTINUED]
* THE COLD WAR (119): Khrushchev would insist to the end of his life that he had done the right thing in Paris; that he had to stand up to the Americans; that if he hadn't pressed the issue to the hilt, he would have been judged a weakling. Possibly so, but Ambassador Thompson later said that Red Army generals, who rarely had much to say to him, told him they thought Khrushchev was overplaying his hand. While Mao Zedong was pleased with the way Soviet rapprochement with the West had been so noisily derailed, Mao was the last person Khrushchev wanted to please, and everyone else seemed appalled.
The premier had put on a performance that made him seem irrational and juvenile to the world, and announced that the Soviet Union did not want peace. It was a diplomatic disaster for the USSR; Khrushchev might well have been hanged for failing to react strongly, but he gave himself something of a hanging by doing so. When he was asked by other Red leaders if he had made it clear to the Americans, in the face of earlier overflights, that such intrusions were unacceptable, he admitted he had not -- but weakly added that they would have simply gloated that he couldn't shoot them down. Khrushchev later judged the U-2 shootdown as the event that began the decline in the credibility of his leadership, that it had shown he was unable to defend the Motherland. It seems more plausible that it underlined his lack of leadership skills.
The Soviet government continued to make a fuss about the U-2 overflights. In response to Soviet agitation in the UN, on 26 May 1960, US UN Ambassador Henry Cabot Lodge displayed a wooden plaque, representing the Great Seal of the United States, which had been given to US Ambassador to the USSR Averell Harriman in 1946. It wasn't until a security check in 1952 that it was found to contain a bugging device.
It was very ingenious; while the Americans sensibly checked the embassy for bugs, the one in the plaque had no external wiring, in fact no battery, being powered by a strong radio signal transmitted into the embassy. Lodge went on to say that over a hundred other bugs had been found in US missions and residences in the USSR and Eastern Europe. It wasn't a defense, but the Americans were no longer denying the overflights, instead just suggesting the Soviets "protested too much". They weren't done protesting either, walking out of the Geneva talks a month later, putting an end to them. Khrushchev had written off Eisenhower, banking on a new start with the next American presidential administration.
The U-2 shootdown was not only a political landmark, it also represented a turning point in military thinking. Up to that time, the push had been to build combat aircraft that could fly ever faster and higher, such as the B-70; but no matter how much aircraft pushed the envelope, they would not be able to escape the reach of missiles. Bomber doctrine now shifted to low-level tactics, coming in "under the radar" at subsonic speeds. Electronic countermeasures, to blind and spoof adversary radars, were nothing new -- but the race to develop better countermeasures, and find means to defeat those countermeasures, began to pick up steam.
* Khrushchev's reserves of anger and resentment hadn't been exhausted in Paris; there was easily enough left over for Mao and the Chinese. On 18 June, the premier announced he would attend the Third Congress of the Romanian Communist Party, which meant other Red leaders were forced to drop what they were doing and attend as well. Mao was an exception, since he didn't like to travel abroad -- and, as it turned out, he wouldn't have enjoyed it anyway. When Khrushchev addressed the congress, he defended his push towards peaceful coexistence with the West, rebutting Chinese criticisms, with the Soviet delegation also distributing a lengthy "Letter of Information" that carefully rebutted Chinese assertions.
Peng Zhen, the head of the Chinese delegation to the congress, replied with counter-rebuttals, and also distributed a scathing private message the Chinese had received from the Kremlin. Khrushchev exploded, engaging in an unscripted rant on the last day of the congress, personally attacking Mao -- for example, saying he got his Marxist theory "out of his nose". Peng lashed back, replying that all Khrushchev's foreign policy amounted to was to blow hot and cold to the West.
Any impartial observer would have concluded much the same, but it was the last straw for Khrushchev; over the next month, he yanked Soviet advisors from China, while scrapping hundreds of Sino-Soviet contracts and cooperative arrangements. It was precipitous, the rift between the USSR and China now becoming too great to close. Given Mao's prickly and dismissive attitude, it's hard to believe the break wouldn't have happened sooner or later -- but as with the U-2 incident, there was no attempt to handle things in a graded fashion. Relations between the USSR and China would settle down later in the year, but the relaxation would prove strictly temporary.
* Back in the White House, Eisenhower's presidency had run out of steam. The president's hopes for an arms limitation agreement were in tatters, and he hadn't enough time in office to take on any new major initiatives. To the extent he went through the motions on arms limitation, it was entirely for propaganda purposes.
His trip to the USSR having been decisively derailed, Eisenhower took a tour of the Far East in June -- which was inconsequential, doing no more than reassuring American allies in the region that the US took them seriously, and proved discouraging when the Japanese government asked him to drop his visit there. Japanese communists were making a lot of public trouble over an upcoming Japan-US mutual defense pact, and Japanese authorities worried that they could not guarantee the president's safety. Eisenhower could not object, but he felt that the withdrawal of his invitation amounted to a communist win. Nonetheless, the defense pact was signed.
As Eisenhower was returning to the US, he was greeted by news of Lyndon Johnson publicly questioning the value of "personal diplomacy" and "good-will trips", with Eisenhower privately calling Johnson a "smart aleck". Secretary of State Herter suggested the president not pay the rhetoric too much mind; it was just Lyndon Johnson ranting. [TO BE CONTINUED]
* SCIENCE NOTES: In the latest report from the front lines in the war over genetically modified foods (GMF), a letter signed by 100 Nobel laureates -- including such bioscience luminaries as David Baltimore, Stanley Prusiner, and James Watson -- has specifically called out Greenpeace and others for their resistance to GMFs, stating:
Scientific and regulatory agencies around the world have repeatedly and consistently found crops and foods improved through biotechnology to be as safe as, if not safer than those derived from any other method of production. There has never been a single confirmed case of a negative health outcome for humans or animals from their consumption. Their environmental impacts have been shown repeatedly to be less damaging to the environment, and a boon to global biodiversity.
Greenpeace has spearheaded opposition to Golden Rice, which has the potential to reduce or eliminate much of the death and disease caused by a vitamin A deficiency (VAD), which has the greatest impact on the poorest people in Africa and Southeast Asia.
The letter got even less subtle than that, accusing Greenpeace and ETC of a "crime against humanity". That seems a bit over-the-top, but it hardly makes any difference, because the GMF wars are effectively a dialogue of the deaf: GMF advocates know perfectly well GMF opponents have a selective view of the facts at best, and by that same coin have no credible responses to criticisms, instead simply parroting the same complaints over and over again.
It has to be emphasized, as always, that nobody is claiming that GMFs are entitled to a free pass. It is simply that:
Food contamination -- a problem that is primarily due to food handling, and so affects "natural" foods just as much as any other -- is a far more demonstrable threat to the public than GMFs, but there is no great fuss about it: it happens every now and then, producers withdraw their product, and try to make amends for the hit on their reputation. If there is no categorical concern over food contamination, what sense does hysteria over GMFs make?
* As discussed by an article from AFP ("Plants Won't Boost Global Warming As Much As Feared" by Marlowe Hood, 16 March 2016), over the course of a year, land-based plants and soil microbes emit about 117 to 118 billion tonnes (gigatonnes / GT) of carbon into the atmosphere, six times as much as humans release by burning fossil fuels. At the same time, through photosynthesis, they soak up about 120 GT. The 2 to 3 GT surplus makes the terrestrial plant kingdom a "net sink" for CO2 that removes up to 30% of human-generated carbon pollution from the air.
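The net-sink figure follows from trivial arithmetic on the fluxes cited above; a quick sketch (the variable names are mine):

```python
# Annual terrestrial carbon fluxes as cited in the article, in gigatonnes (GT) of carbon.
emitted_low, emitted_high = 117.0, 118.0   # released by plants and soil microbes
absorbed = 120.0                           # taken up via photosynthesis

# The net sink is simply uptake minus release.
net_sink_low = absorbed - emitted_high     # 2.0 GT
net_sink_high = absorbed - emitted_low     # 3.0 GT
print(f"Net terrestrial sink: {net_sink_low:.0f} to {net_sink_high:.0f} GT of carbon per year")
```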
However, at elevated temperatures, plants start to increase the amount of CO2 they emit relative to that they absorb. Early experiments had shown that leafy trees exposed to a temperature increase of 3 to 4 degrees Celsius (5.4 to 7.2 degrees Fahrenheit) would quickly begin to pump out an additional 20% of carbon dioxide or more.
Peter Reich of the University of Minnesota and colleagues decided to perform a practical investigation of what would actually happen with plants at elevated temperatures, setting up a heated environment in the wild in 2009 for some 1,200 trees that included the ten dominant North American temperate-zone species.
In an experiment, codenamed "B4Warmed", lasting five years, they kept temperatures at 3.4C (6.1F) above seasonal averages. To their surprise, the researchers discovered that -- over the long haul -- all 10 species acclimated to their new conditions. Carbon dioxide output increased by only 5%, much less than expected. Reich said that, though encouraging, it really didn't change the fundamentals of climate change: "The problem we created in the first place with our greenhouse gas emissions still exists." This is all the more true because of increasing global deforestation.
* Microbiome studies are all the rage in the sciences. Accordingly, as discussed by a note from AAAS SCIENCE NOW Online ("Earth's Microbes Get Their Own White House Initiative" by Kelly Servick, 13 May 2016), the Obama Administration has decided to push the exercise along.
Having launched efforts to map the human brain, fight drug-resistant bacteria, advance precision medicine, and cure cancer, in May the White House turned the focus onto the microbiome. The "National Microbiome Initiative", generated by the White House Office of Science and Technology Policy (OSTP), is set up to fund cross-disciplinary projects that would help understand the function of individual microbes in the microbiome, and map how they interact in communities -- from those that may fend off disease in the human intestines, to those that help plants pull nutrients from soil, to those that capture and release carbon dioxide in the ocean.
The initiative would allocate $121 million USD in Federal money, from funding already appropriated and included in the president's 2017 budget request, toward microbiome-focused research grants at NASA, the Department of Energy, the National Science Foundation, the National Institutes of Health, and the US Department of Agriculture. Private foundations, companies, and academic institutions have pledged another $400 million USD -- a quarter of that being from the Bill & Melinda Gates Foundation to study the effects of the microbiome on malnutrition, and find ways to manipulate soil microbes to improve crops in sub-Saharan Africa.
Microbiologist Jeff Miller of University of California in Los Angeles said the initiative's underlying goal should be to promote experiments testing cause and effect, not just showing tantalizing but inconclusive associations that have so far been typical of microbiome research. Miller -- one of 17 researchers who helped inform OSTP by publishing a vision for a "Unified Microbiome Initiative" in 2015 -- commented: "We have incredibly interesting correlations between a certain type of bacterial community and obesity, or type 2 diabetes, or whether a plant is going to grow fast or not. We're generating hypotheses, but we've kind of lacked the tools to rigorously test them."
A White House fact sheet released along with the Microbiome Initiative announcement accordingly mentioned "tools" 17 times. What kind of tools? One, Miller says, might be a precise way to eliminate a single microbial species while leaving its neighbors untouched -- possibly with a targeted nanoparticle or the precise editing of a key gene. Another priority might be nanoscale imaging methods for observing groups of microbes without disrupting them.
A tool at the top of the wish list for microbial ecologist Janet Jansson at the Pacific Northwest National Laboratory in Richland, Washington, would be higher-throughput mass spectrometry -- a technology that allows researchers to sort through the proteins in a microbial sample. Jansson says that genetic sequencing "only gets you so far. If you want to understand more about the functions that are carried out in the communities, then it's desirable to know about the proteins that they are producing, and also the metabolites."
The White House initiative is more modest than it appears, amounting to an effort to coordinate research that's already in progress. However, it does provide more funding, and nobody in the field is complaining that the White House patronage is a problem. Indeed, by raising the public status of microbiome research, it's all for the good.
* 5G REVOLUTION CONTINUED: The push towards fifth-generation (5G) wireless systems, discussed here last summer, is continuing to build up momentum, as clusters of discreet 5G antennas begin to spring up in Shanghai, Manhattan, London. Although the 5G revolution means a big outlay for new infrastructure, even as service providers are finishing off their investments in 4G infrastructure, they're willing to lay out the money for 5G. It's seen as the future: Rahim Tafazolli, director of the 5G Innovation Centre at the University of Surrey in the UK, says the goal of 5G is no less than to give users the "perception of infinite capacity", with a framework that will make the universal "internet of things (IoT)" a reality.
Overblown? Maybe -- Kester Mann of CCS Insight, a research firm, suggests it might be thought of as "a lot of hype". 5G technology is still a moving target: there isn't any consensus yet on what radio band and what technology standards are to be used. All the industry has right now is a set of rough "requirements", the most important being speeds of up to 10 gigabits per second (GBPS), and latencies of less than a millisecond.
Nonetheless, South Korea and Japan, both leaders in wired broadband, are pushing forward on 5G wireless. AT&T and Verizon both invested early in 4G, and would like to lead again with 5G. The market for network equipment has peaked, as recent results from Ericsson and Nokia show, so the makers also need a new generation of products and new groups of customers. The demand is there, with wireless data traffic continuing steady growth, one estimate suggesting that networks will have to be ready for a thousandfold increase in data volumes in the first half of the 2020s.
There's still a lot to be hammered out. Media companies are pushing for more bandwidth; IoT firms don't need a lot of bandwidth, but they want nodes with low power consumption. Online-gaming firms will insist on low latency. Big players like Apple, IBM, Samsung, & Google are working to stake out their turf. In 2014, Google bought Alpental Technologies, a startup which was developing a cheap, high-speed communications service using extremely high radio frequencies, known as "millimeter-wave (MMW)", in the spectrum bands above 3 gigahertz where most of 5G is expected to reside.
Questions over spectrum may be the easiest to resolve, in part because the World Radiocommunication Conference, established by international treaty, will settle them. Its last gathering, in November 2015, failed to agree on the frequencies for 5G, but it is expected to do so when it next meets in 2019. It is likely to carve out space in the MMW bands. Nobody is expecting the bitter shootout that took place in 4G between LTE, now the standard, and WiMax, backed by Intel; but right now 5G technology is a muddle.
5G advocates are worried that ever-more-pervasive wi-fi will undermine the 5G push. Among 5G advocates, there's a contest between those who think that 4G can be supercharged up to 5G standards, and those who want to start from scratch. On 11 February, for instance, Qualcomm, a chip-design firm, introduced the world's first 4G chip set that allows for data-transmission speeds of up to 1 GBPS, using a technique called "carrier aggregation", which means it can combine up to ten wireless data streams of 100 megabits per second. 5G revolutionaries instead envision phones that give up the traditional interlink from phone to cell tower.
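The carrier-aggregation arithmetic behind that 1 GBPS figure is simple enough to sketch (a toy illustration; the names are mine):

```python
# Carrier aggregation as described: combine up to ten component wireless
# data streams of 100 megabits per second each into one logical link.
streams_mbps = [100] * 10            # ten aggregated carriers
aggregate_mbps = sum(streams_mbps)   # 1000 Mbps total
aggregate_gbps = aggregate_mbps / 1000.0
print(f"Aggregate throughput: {aggregate_gbps:.0f} Gbps")  # → 1 Gbps
```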
One of the most outspoken representatives of the revolutionary camp is China Mobile. For Chih-Lin I, its chief scientist, wireless networks, as currently designed, are no longer sustainable. Antennas are using ever more energy to push each extra megabit through the air. Her firm's position, she says, is based on necessity: as the world's biggest carrier, with 1.1 million 4G base stations and 825 million subscribers -- more than all the European operators put together -- any problems with the current network architecture are magnified by the firm's scale. Skeptics suspect there may be an "industrial agenda" at work, that favors Chinese equipment-makers and lowers the patent royalties these have to pay. The more different 5G is from 4G, the higher the chances that China can make its own intellectual property part of the standard.
Whatever the underlying reality, I's vision of 5G networks is widely shared. They will not only be "super fast", she says, but "green and soft", meaning much less energy-hungry and entirely controlled by software. As with computer systems before them, much of a network's specialized hardware, such as the processor units that sit alongside each cell tower, will become "virtualized" -- that is, it will be replaced with software, making it far easier to reconfigure. Wireless networks will become a bit like computing in the online "cloud", and in some senses will merge with it, using the same off-the-shelf hardware.
Advocates of the 5G revolution envision an end to the tyranny of restrictive phone subscription rates, with phone service providers reduced to the status of providing "pipes" to which they can add little value. Value would then be provided by those who generate services. There's little doubt it means a shakeout; within a generation, the era of whimsical service contracts, and of eyesore cellphone towers, will be forgotten.
* THE METHANE CONUNDRUM: As discussed by an article from WIRED Online blogs ("The US Is Finally Taking on Methane, Climate Change's Hidden Villain" by Emma Foehringer Merchant, 2 May 2016), while carbon dioxide has been tagged as the villain in climate change, methane has tended to be seen as a petty minion -- despite the fact that it is somewhere in the range of one to two orders of magnitude more potent as a greenhouse gas than CO2. Yes, methane does break down into CO2 and water over a period of decades, but in the meantime it is a significant contributor to global warming.
However, in March, US President Obama and Canadian Prime Minister Justin Trudeau announced that the two countries would collaborate to cut methane emissions by 40% to 45% by 2025, and to regulate emissions from existing oil and gas operations, which account for a large proportion of methane leaks. In April, Gina McCarthy, the chief administrator of the US Environmental Protection Agency (EPA), declared tackling methane emissions as a top priority for the agency in 2016, with the agency defining regulations to limit methane emissions from new oil and gas wells.
The EPA is still trying to come to grips with regulating the oil and gas infrastructure that already exists, and which is projected to account for 90% of methane emissions in 2018. However, the rules for new operations will represent a significant step forward; while oil and gas drilling and exploration release the majority of methane pollution in this country, until recently Federal methane regulation had mostly been either voluntary, or tied to other air standards.
The new EPA rules are coming at the right time politically, in the wake of a furor over a ruptured natural gas well at Aliso Canyon, in the Porter Ranch section of Los Angeles, which produced the largest methane leak in American history. However, even before the leak, nearly 70% of registered voters said they favored the EPA's proposed methane rules -- a startling consensus, given the nation's fractious political climate.
The natural gas industry produces the majority of US methane emissions. In 2014, natural gas systems released 176 million tonnes, nearly a quarter of total emissions. The second largest source was cattle digestion, which accounted for 22.5% of emissions. Petroleum systems contributed 9.3%. Natural gas has been promoted as a "bridge" from the traditional hydrocarbon-fuel economy of coal and oil to the emerging low-emissions society -- but methane leaks, if unchecked, could undermine the benefits of natural gas.
Each year the oil and gas industry loses nearly 10 million tonnes of methane during production, processing, and transport; as things stand now, it's going to get worse, as the US becomes more reliant on natural gas. Capping the leaks presents a tremendous opportunity to put a brake on climate change -- but will require collaboration between groups that don't always get along well: government, industry, and environmental activists.
Oil and gas industry officials worry that they may be saddled with regulations that will make doing business much more difficult -- while "fractivists" continue to lobby state and Federal players to tighten regulations or establish moratoriums. However, progress is being made in industry-government collaboration. On 30 March 2016, the EPA announced a voluntary initiative that sets up a five-year time frame for companies to "make and track ambitious commitments to reduce methane emissions." Among the founding members of this Methane Challenge Program were the country's largest electric company, Duke Energy, and SoCalGas, the company in charge of the well that blew out in Porter Ranch, California.
Industry does have a motive to cap methane leaks: leaks mean less natural gas they can sell, and less profit. The only question is how much it will cost the industry to cut back on leaks. The consulting firm ICF International finds that oil and gas companies could cut methane emissions by 40% by spending less than one cent per 28 cubic meters (1,000 cubic feet) of natural gas. Not everyone in industry has been impressed by such estimates; leak detection and monitoring equipment is expensive, too much so for small firms. The costs of monitoring have to come down.
There is a general consensus that leaks can be greatly reduced if the will is there. A 2015 working paper from the World Resources Institute, a research organization focused on natural resource management, recommended common-sense measures like annual maintenance along transmission lines to ensure equipment seals are solid. Devices that regulate the temperature, pressure, and flow of natural gas account for nearly another third of methane emissions. Infrared cameras have proven very useful for identifying methane leaks. Exactly what measures will be specified in the new EPA regulations, and how strongly they will be enforced, remains to be seen.
* AI REVOLUTION (3): Beyond more-or-less conveniently labeled data, the internet also includes volumes of data that isn't so accessible. For this reason, a race is on to develop "unsupervised-learning" algorithms, which can learn without the need for human help.
There has been substantial progress. In 2012, a team at Google led by Andrew Ng handed an unsupervised-learning machine millions of YouTube video images. The machine learned to categorize common things it saw, including human faces and cats -- cats being particularly common residents of the online world, particularly in YouTube videos. No human had tagged the videos as containing "faces" or "cats", but, after seeing endless examples of each, the machine had simply decided that the statistical patterns they represented were common enough to make into categories of objects.
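Google's system used deep neural networks, but the core idea -- discovering categories in unlabeled data -- can be illustrated with a much simpler unsupervised algorithm, k-means clustering. A toy sketch (the data and names are invented; this is not Google's method):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group unlabeled 2-D points into k clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)   # start from k random points
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0])**2 + (p[1] - centers[c][1])**2)
            clusters[i].append(p)
        # Move each center to the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

# Two obvious groups, given with no labels; the algorithm finds them anyway.
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

Nobody told the algorithm there were "two kinds" of points; it inferred the categories from the statistics of the data, which is the essence of unsupervised learning.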
The next step up from recognizing individual objects is to recognize lots of different ones. A paper published by Andrej Karpathy and Li Fei-Fei at Stanford University describes a computer-vision system that is able to label specific parts of a given picture. Show it a breakfast table, for instance, and it will identify the fork, banana slices, the cup of coffee, the flowers on the table and the table itself. It will even generate descriptions, in natural English, of the scene, though it is still not perfectly accurate.
Big internet firms such as Google are interested in this kind of work because it means potential profits. Better image classifiers should improve the ability of search engines to find what their users are looking for. In the longer run, the technology could find other, more transformative uses. Being able to break down and interpret a scene would be useful for robotics researchers, for instance, helping their creations -- from industrial assistants to self-driving cars to battlefield robots -- to navigate the cluttered real world.
Image classification is also an enabling technology for "augmented reality", in which wearable computers, such as Google's Glass or Microsoft's HoloLens, overlay useful information on top of the real world. Enlitic, a firm based in San Francisco, hopes to employ image recognition to analyze X-rays and MRI scans, looking for problems that human doctors might miss.
Deep learning is not restricted to images, either. It is a general-purpose approach to pattern recognition, which means, in principle, that any activity which has access to large amounts of data -- from running an insurance business to research into genetics -- might find it useful. At a recent competition held at CERN in Geneva, deep-learning algorithms did a better job of spotting the signatures of subatomic particles than the software written by physicists -- even though the programmers who created these algorithms had no particular knowledge of physics.
Machine translation, too, will be improved by deep learning. It already uses neural networks, benefiting from the large quantity of text available online in multiple languages. Andrew Ng, now at Baidu, thinks good speech-recognition programs running on smartphones could bring the internet to many people in China who are illiterate, and so struggle with ordinary computers. At the moment, 10% of the firm's searches are conducted by voice. He believes that could rise to 50% by 2020.
Such different sorts of AI can be linked together to form an even more capable system. In May 2014, for instance, at a conference in California, Microsoft demonstrated a computer program capable of real-time translation of spoken language. The firm had one of its researchers speak, in English, to a colleague in Germany; the colleague heard the researcher speaking in German. One AI program decoded sound waves into English phrases. Another translated those phrases from English into German, and a third rendered them into German speech. The firm hopes, one day, to build the technology into Skype, its internet-telephony service. [TO BE CONTINUED]
* ANOTHER MONTH: At the risk of adding fuel to the ongoing global frenzy over Britain's vote to leave the EU, THE GUARDIAN reports that, in response to the Brexit vote, US Secretary of State John Kerry jetted off to Europe to take emergency action -- it appears the Obama Administration expected the Brexit vote to fail, and was caught flat-footed when it didn't.
What makes that interesting is that Kerry voiced doubts that it was even possible for Britain to leave the EU. Kerry said British Prime Minister David Cameron was entirely reluctant to invoke article 50, the EU exit procedure. Kerry said Cameron felt powerless to "start negotiating a thing that he doesn't believe in" and "has no idea how he would do it". The punchline came when Kerry added -- it seems with London Mayor Boris Johnson in mind, the mayor having been the loudest voice for Brexit: "And by the way, nor do most of the people who voted to do it."
If the British government invokes article 50, it will begin two years of negotiations on an EU withdrawal treaty. EU officials have described the process as "irreversible" once begun, but legal experts have told the House of Lords that a country could change its mind, if with "substantial political consequences". The article 50 talks would cover Britain's EU exit, including the status of EU nationals living in the UK and Britons on the continent. A trade deal would be negotiated separately, a process that insiders think would take anywhere from 5 to 10 years. The EU side insists Britain cannot have a trade deal until the article 50 divorce is signed and sealed.
German Chancellor Angela Merkel, the most significant player in the EU, is counseling patience with Britain -- the mindset being that the UK hasn't betrayed the EU, but has fallen into convulsions due to factors beyond the control of responsible British leadership. Merkel has a reputation for being low-key and plodding, but even her detractors say that's exactly the mindset demanded by the situation.
It is difficult to see from a distance exactly what the Brexiteers thought they were doing, but astonishingly, it is becoming clearer they were simply blowing smoke about EU "tyranny", and hadn't considered things beyond that. Jean-Claude Juncker, the president of the European commission, commented: "What I don't understand is that those who want to leave are totally unable to tell us what they want. I thought that if you wanted to leave you had a plan ... they don't have it."
Exactly what happens next, who can say? The current British government has fallen, so the decisions will have to be made by the next one. The jockeying for who will be the next PM is now on in earnest, though Boris Johnson has pulled his hat out of the ring; Johnson found out that prominent Tories who he hoped would back him for the job were after it themselves. In any case, it is obvious the new leadership that pushed for Brexit would judge retracting the decision political suicide -- but it is equally obvious that the political pressure against invoking article 50 will be so great as to make it very possible a government that does so will fall in its turn.
In short, the war over Brexit has only just begun. It does not seem a good bet that Brexit will be called off, but even if not, the end result will be a drawn-out nightmare that will discourage any further attempts to leave the EU, and even strengthen the union.
* My set of MP3 music players is aging and in need of replacement, so I just bought a SanDisk Clip Jam player. It's a nice item, but somehow I got the menus set to Indic or Thai script -- something like that, the bottom line being that I couldn't figure out how to navigate the menus to reset it to English.
After fumbling for some time, I downloaded the PDF manual for the Clip Jam. There was ONE English item in the menu, and I finally used that as a reference to find the language menu. I was saved. Having done that, however, I was curious about how much trouble I would have with other languages -- first changing it to French, since I can puzzle out simple French and didn't think I'd get lost with it. Right, no problem, it's "Parametres Systeme" and "Langue".
Once I found the right entry, it was easy to change languages back and forth, so I puzzled around with other languages. Spanish? "Idioma." Deutsch? "Sprache" -- of course, easy. Dutch? "Taal" -- ah, no, I don't want to deal with that. Incidentally, although I was encouraged to learn about the USB-C standard, with its small non-polarized connector, my hopes of the end of the era of balkanized USB connector standards do not seem likely to be fulfilled. The Clip Jam, and some other kit I've purchased, come with a new "low profile" USB connector, demanding yet another cable for recharging. It appears that this connector "standard" was established for phones.
* And, in other petty home technical challenges, I uploaded a new document to AIR VECTORS, and tried to dump a file list using FTP via my web browser into the website, so I could check for file count discrepancies. However, though I'd done that many times before, this time it wouldn't work. After a fair amount of fumbling, and being tipped off by the entries for FTP in the site admin control panel, I finally figured out that, instead of using:
-- as I had been, I had to use:
Why the hosting service got nitpicky with me is a puzzle, but not one worth further concern. Anyway, I got the AIR VECTORS file list downloaded into my PC -- to find out that I had more files than I should have, compared to the long-winded spreadsheet I use to keep track of the numbers.
My heart sank, since it can be very time-consuming to figure out, from the thousands of files on the websites, which ones are bogus. However, after fumbling a bit, I suddenly realized that both the file list and the spreadsheet listing were in the same alphabetical order. I could simply use bisection to narrow down where the discrepancy was:
It proved highly efficient, and I corrected the problem. I rationalized my spreadsheet to make it easier to tally up file counts.
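That bisection can be sketched in a few lines of Python: since both listings are sorted identically, they agree at every index before the first extra file and disagree at every index from it onward, so a standard binary search pins down the culprit (file names here are invented):

```python
def first_divergence(a, b):
    """Binary search for the first index where two sorted listings diverge.

    If 'a' holds an extra entry relative to 'b', all indices before it
    match and all indices from it onward are shifted, so the usual
    bisection invariant applies.
    """
    lo, hi = 0, min(len(a), len(b))
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] == b[mid]:
            lo = mid + 1   # listings still agree here; divergence is later
        else:
            hi = mid       # already diverged; look earlier
    return lo

# Hypothetical listings: the site has one bogus extra file.
tracked = ["a.htm", "b.htm", "c.htm", "d.htm"]
site = ["a.htm", "b.htm", "bogus.htm", "c.htm", "d.htm"]
print(site[first_divergence(site, tracked)])  # → bogus.htm
```

With thousands of files, this takes only a dozen or so comparisons rather than a line-by-line scan, which is why the manual version of the same trick felt so efficient.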
What was interesting was that I had been tracking down such file discrepancies for years, and hadn't thought of the simple, in hindsight obvious, way of doing it. Our brain is wired to look for connections, to the point where we often make false ones. We are inclined to make much of the flashes of insight we have; it is more sobering to realize that we may fail for years to make connections that we should have seen right away: "Oh, how could I have missed that?!"
* Thanks to one reader for a donation to support the websites last month. That is very much appreciated.