
UC Santa Barbara
Department of Geography
If you would like to be on the mailing list for our latest news postings, which goes out about every two weeks, or if you have anything relating to the Department that you consider noteworthy, interesting, or just plain fun that you would like to share, please contact the news editor at



August 26, 2015 - UCSB Listed Number 14 in Latest Ranking of Top Universities

"UCSB is listed among the top national universities in Washington Monthly’s 2015 rankings; the campus also is lauded as an ‘Access Improver’ for low-income students." The following article was written by Andrea Estrada for The UCSB Current and was posted on August 24, 2015 with the title “In the Top Echelon”:

UC Santa Barbara has moved up a notch in Washington Monthly magazine’s annual National Universities Rankings. Continuing its upward trajectory, UCSB is ranked number 14 on the 2015 list, which appears in the magazine’s September/October issue. The campus came in at number 15 in last year’s rankings and number 22 in 2013.

In addition, UCSB is listed at number 17 in the magazine’s “Best Bang for the Buck” rankings in the Western Schools category. The university also is highlighted in the magazine’s College Guide as one of 10 “Access Improvers,” colleges and universities that have increased their enrollments of federally funded Pell Grant students while maintaining strong student outcomes.

“The University of California, Santa Barbara, for example, is in the top echelon of its state’s universities, serving students of variable income and ability,” wrote Mamie Voight, director of policy research at the Institute for Higher Education Policy, and Colleen Campbell, a senior policy analyst at the Association of Community College Trustees. “Yet 38 percent of Santa Barbara students are low income, compared to only 15 percent at Penn State, and Santa Barbara charges low-income students about half as much.”

While U.S. News & World Report usually awards its highest ratings to private universities, the editors of Washington Monthly prefer to give public universities more credit and higher rankings. Fifteen of the top 20 universities in the Washington Monthly rankings are taxpayer-funded.

Among the criteria considered by Washington Monthly are the percentage of students receiving Pell Grants, the difference between predicted and actual graduation rates, total research spending, Peace Corps service by graduates, community service participation, faculty awards, and faculty members elected to national academies.

Regarding the Best Bang for the Buck rankings, the magazine’s editors describe it as their “exclusive list of the colleges in America that do the best job of helping non-wealthy students attain marketable degrees at affordable prices.” Of the 1,540 colleges and universities in the broader rankings, only 386 qualified as Best Bang for the Buck schools. And of those, UCSB landed in the top 20. More information, including the complete rankings, is available at Washington Monthly College Guide.

Image 1 for article titled "UCSB Listed Number 14 in Latest Ranking of Top Universities"
UCSB Chancellor Henry T. Yang. Washington Monthly photo; photo credit: George Foulsham
Image 2 for article titled "UCSB Listed Number 14 in Latest Ranking of Top Universities"
The Department of Geography is doing its part in such ratings! In its latest (2010) Assessment of Research-Doctorate Programs, the National Research Council used a new methodology designed to show the full complexity of the data and the difficulty of assigning a unique ranking: each program and department was given a range of possible rankings, depending on how much weight was given to the different components used in the ranking. Accordingly, our Graduate Division rated Geography as number 2 in the nation, a ranking based on a "sort by S-weight 5th, then 95th percentile rank," while Cornell University concluded that UCSB Geography is number 1, using a ranking "sorted by R Mid." In the same vein, the Chronicle of Higher Education ranked us number two in the nation on its "Top Research Universities Faculty Scholarly Productivity Index" in 2008, phds.org ranked us the number 1 "large, prestigious program" among Departments of Geography in the USA in 2009, and Geographical Perspectives ranked us the number 1 program for spatial careers in 2013, 2014, and 2015.

August 22, 2015 - The Lawn Is the Largest Irrigated Crop in the USA

“Each year, we drench our lawns with enough water to fill the Chesapeake Bay! That makes grass – not corn – America’s largest irrigated crop. Our nation’s lawns now cover an area larger than New York State, and, each year, we use about 2.4 million metric tons of fertilizer just to maintain them. When there is too much fertilizer on our lawns, essential nutrients are easily washed away by sprinklers and rainstorms. When these nutrients enter storm drains and water bodies, they often become one of the most harmful sources of water pollution in the United States” (source).

The following is a study from The International Society for Photogrammetry and Remote Sensing (ISPRS):

ABSTRACT: Lawns are ubiquitous in American urban landscapes. However, little is known about their impact on the carbon and water cycles at the national level. The limited information on the total extent and spatial distribution of these ecosystems and the variability in management practices are the major factors complicating this assessment. In this study, by relating turf grass area to fractional impervious surface area, it was estimated that potentially 163,812 km2 (± 35,850 km2) of land are cultivated with some form of lawn in the continental United States, an area three times larger than that of any irrigated crop. Using the Biome-BGC ecosystem process model, the growth of turf grasses was modeled for 865 sites across the 48 conterminous states under different management scenarios, including either removal or recycling of the grass clippings, different nitrogen fertilization rates, and two alternative water irrigation practices. The results indicate that well-watered and fertilized turf grasses act as a carbon sink, even assuming removal and bagging of the grass clippings after mowing. The potential soil carbon accumulation that could derive from the total surface under turf (up to 25.7 Tg of C/yr with the simulated scenarios) would require 695 to 900 liters of water per person per day, depending on the modeled water irrigation practices, and a cost in carbon emissions due to fertilization and operation of mowing equipment ranging from 15 to 35% of the sequestration.

CONCLUSIONS: In this study we mapped the total surface of turf grasses in the continental U.S. and simulated its water use and C sequestration potential under different management practices for irrigation, fertilization, and fate of the clippings. Rather than trying to accurately quantify the existing fluxes, we simulated scenarios in which the entire surface was to be managed like a well-maintained lawn, a thick green carpet of turf grasses, watered, fertilized, and kept regularly mown. The accuracy of the results is therefore limited both by the uncertainty in the mapping of the total lawn area and by the simplifying assumptions made while modeling turf grass growth. The analysis indicates that turf grasses, occupying about 2% of the surface of the continental U.S., would be the single largest irrigated crop in the country. The scenarios described in this study also indicate that a well-maintained lawn is a C sequestering system, although the positive C balance, discounted for the hidden costs associated with N-fertilizer and the operation of lawn mowers, comes at the expense of a very large use of water, N, and, not quantified in this study, pesticides. The model simulations have assumed a conservative amount of fertilization (a maximum of 146 kg N/ha/yr). In general the rates of N application are similar to those used for row crops, and the current high-input choices made by consumers and professional turf managers for maintaining the monocultures of turf grasses typical of many lawns and play fields come at the risk, not analyzed here, of watershed pollution due to improper fertilization and use of pesticides. If the entire turf surface were well watered following commonly recommended schedules, there would also be enormous pressure on U.S. water resources, especially when considering that drinking water is usually sprinkled. At the time of this writing, in most regions outdoor water use already reaches 50-75% of total residential use.
Because of demographic growth, and because more and more people are moving toward the warmer regions of the country, the potential exists for the amount of water used for turf grasses to increase. Beneficial effects of turf grasses, such as carbon sequestration, recreation, reduced storm runoff due to increased soil infiltration during intense rainfall, and the removal of impurities and chemicals as water percolates through the root zone, could be sought by minimizing the application of fertilizers and pesticides, introducing lower-input species mixes such as clover and other so-called weeds (Bormann, 1993), decomposing the grass clippings on site, and extending the practice of irrigating with waste water rather than drinking water.
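The abstract's "about 2%" figure can be checked with simple arithmetic. A minimal sketch (the conterminous-U.S. land area used below is an outside approximation, not a number from the article):

```python
# Rough check of the turf-area figures quoted in the ISPRS abstract.
TURF_AREA_KM2 = 163_812        # estimated turf grass area (± 35,850 km^2, from the abstract)
CONUS_AREA_KM2 = 8_080_000     # approximate conterminous U.S. area (assumption, not from the article)

fraction = 100 * TURF_AREA_KM2 / CONUS_AREA_KM2
print(f"Turf covers about {fraction:.1f}% of the conterminous U.S.")
```

The result, roughly 2.0%, matches the conclusions' statement that turf grasses occupy about 2% of the surface of the continental U.S.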

For more on the subject, see:

  • Jenkins, V. S. (1994). The Lawn: A History of an American Obsession. Smithsonian Books. ISBN 1-56098-406-6.
  • Steinberg, T. (2006). American Green, The Obsessive Quest for the Perfect Lawn. W.W. Norton & Co. ISBN 0-393-06084-5.
  • Wasowski, Sally and Andy (2004). Requiem for a Lawnmower.

Article by Bill Norrington

Image 1 for article titled "The Lawn Is the Largest Irrigated Crop in the USA"
Distribution of the fractional turf grass area (%) in the conterminous U.S. (from the ISPRS article)
Image 2 for article titled "The Lawn Is the Largest Irrigated Crop in the USA"
There may be more acres of lawn in the U.S. than of the eight largest irrigated crops combined. Here are figures for the top four (http://scienceline.org/2011/07/lawns-vs-crops-in-the-continental-u-s/)
Image 3 for article titled "The Lawn Is the Largest Irrigated Crop in the USA"
Acre-feet of water used on US crops, compared to lawns. Ibid.
Image 4 for article titled "The Lawn Is the Largest Irrigated Crop in the USA"
Capability Brown's landscape design at Badminton House. Lawns may have originated as grassed enclosures within early medieval settlements used for communal grazing of livestock, as distinct from fields reserved for agriculture. It was not until the 17th and 18th centuries that the garden and the lawn became places created first as walkways and social areas. They were made up of meadow plants, such as camomile, a particular favorite. In the early 17th century, the Jacobean epoch of gardening began; during this period, the closely cut "English" lawn was born. By the end of this period, the English lawn was a symbol of the status of the aristocracy and gentry; it showed that the owner could afford to keep land that was not being used for a building or for food production. (Wikipedia: Lawn)
Image 5 for article titled "The Lawn Is the Largest Irrigated Crop in the USA"
Before the mechanical lawnmower, the upkeep of lawns was possible only for the extremely wealthy estates and manor houses of the aristocracy. This all changed with the invention of the lawnmower by Edwin Beard Budding in 1830. It took ten more years and further innovations, including the advent of the Bessemer process for the production of much lighter alloy steel and advances in motorization such as the drive chain, for the lawnmower to become a practical proposition. Middle-class families across the country, in imitation of aristocratic landscape gardens, began to grow finely trimmed lawns in their back gardens. From the 1860s, the cultivation of lawns, especially for sports, became a middle-class obsession in England. Pictured, a lawnmower advertisement from Ransomes. Ibid.
Image 6 for article titled "The Lawn Is the Largest Irrigated Crop in the USA"
More than 90 percent of UCSB’s manicured landscape is now irrigated with recycled water, which saves 19.5 million gallons of potable water annually (The UCSB Current)

August 18, 2015 - The Influenza Pandemic of 1918

“The influenza pandemic of 1918-1919 killed more people than the Great War, known today as World War I, claiming somewhere between 20 and 40 million lives. It has been cited as the most devastating epidemic in recorded world history. More people died of influenza in a single year than in the four years of the Black Death bubonic plague from 1347 to 1351. Known as "Spanish Flu" or "La Grippe," the influenza of 1918-1919 was a global disaster.” The following Stanford University article was written by Molly Billings in June 1997 and modified in 2005:

In the fall of 1918, the Great War in Europe was winding down and peace was on the horizon. The Americans had joined in the fight, bringing the Allies closer to victory against the Germans. Deep within the trenches, these men lived through some of the most brutal conditions of life, which it seemed could not be any worse. Then, in pockets across the globe, something erupted that seemed as benign as the common cold. The influenza of that season, however, was far more than a cold. In the two years that this scourge ravaged the earth, a fifth of the world's population was infected.

The flu was most deadly for people ages 20 to 40. This pattern of morbidity was unusual for influenza, which is usually a killer of the elderly and young children. It infected 28% of all Americans. An estimated 675,000 Americans died of influenza during the pandemic, ten times as many as in the world war. Of the U.S. soldiers who died in Europe, half fell to the influenza virus and not to the enemy. An estimated 43,000 servicemen mobilized for WWI died of influenza.

1918 would go down as an unforgettable year of suffering and death, and yet of peace. As noted in the final 1918 edition of the Journal of the American Medical Association: "The 1918 has gone: a year momentous as the termination of the most cruel war in the annals of the human race; a year which marked, the end at least for a time, of man's destruction of man; unfortunately a year in which developed a most fatal infectious disease causing the death of hundreds of thousands of human beings. Medical science for four and one-half years devoted itself to putting men on the firing line and keeping them there. Now it must turn with its whole might to combating the greatest enemy of all--infectious disease."

The effect of the influenza epidemic was so severe that the average life span in the US was depressed by 10 years. The influenza virus had a profound virulence, with a mortality rate of 2.5%, compared to previous influenza epidemics, in which it was less than 0.1%. The death rate from influenza and pneumonia among 15- to 34-year-olds was 20 times higher in 1918 than in previous years. People were struck with illness on the street and died rapid deaths. One anecdote from 1918 tells of four women playing bridge together late into the night. Overnight, three of the women died from influenza. Others told stories of people on their way to work suddenly developing the flu and dying within hours. One physician writes that patients with seemingly ordinary influenza would rapidly "develop the most vicious type of pneumonia that has ever been seen" and later, when cyanosis appeared in the patients: "it is simply a struggle for air until they suffocate." Another physician recalls that the influenza patients "died struggling to clear their airways of a blood-tinged froth that sometimes gushed from their nose and mouth."

The physicians of the time were helpless against this powerful agent of influenza. In 1918, children would skip rope to the rhyme: I had a little bird, / Its name was Enza. / I opened the window, / And in-flu-enza. The influenza pandemic circled the globe. Most of humanity felt the effects of this strain of the influenza virus. It spread following the path of its human carriers, along trade routes and shipping lines. Outbreaks swept through North America, Europe, Asia, Africa, Brazil, and the South Pacific. In India, the mortality rate was extremely high, at around 50 deaths from influenza per 1,000 people. The Great War, with its mass movements of men in armies and aboard ships, probably aided in its rapid diffusion and attack.

The origins of the deadly flu disease were unknown but widely speculated upon. Some of the allies thought of the epidemic as a biological warfare tool of the Germans. Many thought it was a result of the trench warfare, the use of mustard gases, and the generated "smoke and fumes" of the war. A national campaign began using the ready rhetoric of war to fight the new enemy of microscopic proportions. A study attempted to reason why the disease had been so devastating in certain localized regions, looking at the climate, the weather, and the racial composition of cities. They found humidity to be linked with more severe epidemics as it "fosters the dissemination of the bacteria." Meanwhile, the new sciences of the infectious agents and immunology were racing to come up with a vaccine or therapy to stop the epidemics.

The origins of this influenza variant are not precisely known. It is thought to have originated in China in a rare genetic shift of the influenza virus. The recombination of its surface proteins created a virus novel to almost everyone and a loss of herd immunity. Recently, the virus has been reconstructed from the tissue of a dead soldier and is now being genetically characterized. The name of Spanish Flu came from the early affliction and large mortalities in Spain where it allegedly killed 8 million in May. However, a first wave of influenza appeared early in the spring of 1918 in Kansas and in military camps throughout the US.

Few noticed the epidemic in the midst of the war. Wilson had just given his Fourteen Points address. There was virtually no response to, or acknowledgment of, the epidemics in March and April in the military camps. It was unfortunate that no steps were taken to prepare for the usual recrudescence of the virulent influenza strain in the winter. The lack of action was later criticized when the epidemic could not be ignored in the winter of 1918. These first epidemics at training camps were a sign of what was coming in greater magnitude in the fall and winter of 1918 to the entire world.

The war brought the virus back into the US for the second wave of the epidemic. It first arrived in Boston in September of 1918 through the port, busy with war shipments of machinery and supplies. The war also enabled the virus to spread and diffuse. Men across the nation were mobilizing to join the military and the cause. As they came together, they brought the virus with them and to those they contacted. The virus killed almost 200,000 in October of 1918 alone. On November 11, 1918, the end of the war enabled a resurgence. As people celebrated Armistice Day with parades and large parties, a complete disaster from the public health standpoint, a rebirth of the epidemic occurred in some cities. The flu that winter was beyond imagination as millions were infected and thousands died. Just as the war had affected the course of influenza, influenza affected the war. Entire fleets were ill with the disease, and men on the front were too sick to fight. The flu was devastating to both sides, killing more men than their own weapons could.

With military patients coming home from the war with battle wounds and mustard gas burns, hospital facilities and staff were taxed to the limit. This created a shortage of physicians, especially in the civilian sector, as many had been lost to service with the military. Since the medical practitioners were away with the troops, only the medical students were left to care for the sick. Third- and fourth-year classes were closed, and the students were assigned jobs as interns or nurses. One article noted that "depletion has been carried to such an extent that the practitioners are brought very near the breaking point." The shortage was further compounded by the added loss of physicians to the epidemic. In the U.S., the Red Cross had to recruit more volunteers to contribute to the new cause at home of fighting the influenza epidemic. To respond with the fullest utilization of nurses, volunteers, and medical supplies, the Red Cross created a National Committee on Influenza. It was involved in both military and civilian sectors to mobilize all forces to fight Spanish influenza. In some areas of the US, the nursing shortage was so acute that the Red Cross had to ask local businesses to allow workers the day off if they volunteered in the hospitals at night. Emergency hospitals were created to take in patients from the US and those arriving sick from overseas.

The pandemic affected everyone. With one-quarter of the US and one-fifth of the world infected with influenza, it was impossible to escape the illness. Even President Woodrow Wilson suffered from the flu in early 1919 while negotiating the crucial Treaty of Versailles to end the World War. Those lucky enough to avoid infection had to deal with the public health ordinances to restrain the spread of the disease. The public health departments distributed gauze masks to be worn in public. Stores could not hold sales; funerals were limited to 15 minutes. Some towns required a signed certificate to enter, and railroads would not accept passengers without them. Those who ignored the flu ordinances had to pay steep fines enforced by extra officers. Bodies piled up as the massive deaths of the epidemic ensued. Besides the lack of health care workers and medical supplies, there was a shortage of coffins, morticians, and gravediggers. The conditions in 1918 were not so far removed from the Black Death in the era of the bubonic plague of the Middle Ages.

In 1918-19, this deadly influenza pandemic erupted during the final stages of World War I. Nations were already attempting to deal with the effects and costs of the war. Propaganda campaigns and war restrictions and rations had been implemented by governments. Nationalism pervaded as people accepted government authority. This allowed the public health departments to easily step in and implement their restrictive measures.

The war also gave science greater importance as governments relied on scientists, now armed with the new germ theory and the development of antiseptic surgery, to design vaccines and reduce mortalities of disease and battle wounds. Their new technologies could preserve the men on the front and ultimately save the world. These conditions created by World War I, together with the current social attitudes and ideas, led to the relatively calm response of the public and application of scientific ideas. People allowed for strict measures and loss of freedom during the war as they submitted to the needs of the nation ahead of their personal needs. They had accepted the limitations placed with rationing and drafting. The responses of the public health officials reflected the new allegiance to science and the wartime society. The medical and scientific communities had developed new theories and applied them to prevention, diagnostics, and treatment of the influenza patients.

Image 1 for article titled "The Influenza Pandemic of 1918"
The Grim Reaper by Louis Raemaekers (from the Stanford article)
Image 2 for article titled "The Influenza Pandemic of 1918"
Photo of Walter Reed Hospital, Washington, D.C., during the great Influenza Pandemic of 1918 - 1919, also known as the "Spanish Flu." Patients are set up in rows of beds on an open gallery, separated by hung sheets. A nurse wears a cloth mask over her nose and mouth. Not dated, probably during the height of the epidemic, 1918 - 1919. Photo credit: Harris & Ewing via Library of Congress website (Wikimedia Commons)
Image 3 for article titled "The Influenza Pandemic of 1918"
Soldiers from Fort Riley, Kansas, ill with Spanish influenza at a hospital ward at Camp Funston. (Wikipedia: 1918 flu pandemic)
Image 4 for article titled "The Influenza Pandemic of 1918"
The difference between the influenza mortality age-distributions of the 1918 epidemic and normal epidemics – deaths per 100,000 persons in each age group, United States, for the interpandemic years 1911–1917 (dashed line) and the pandemic year 1918 (solid line). The global mortality rate from the 1918/1919 pandemic is not known, but an estimated 10% to 20% of those who were infected died. With about a third of the world population infected, this case-fatality ratio means 3% to 6% of the entire global population died. Influenza may have killed as many as 25 million people in its first 25 weeks. Older estimates say it killed 40–50 million people, while current estimates say 50–100 million people worldwide were killed. This pandemic has been described as "the greatest medical holocaust in history" and may have killed more people than the Black Death. It is said that this flu killed more people in 24 weeks than AIDS has killed in 24 years, more in a year than the Black Death killed in a century (Wikipedia: Ibid.)
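The caption's percentages follow from simple multiplication. A quick sketch of the arithmetic, using the one-third-infected figure and the 10-20% case-fatality range quoted above:

```python
# Share of the entire global population that died = share infected x case-fatality ratio.
infected_share = 1 / 3              # about a third of the world population infected
for cfr in (0.10, 0.20):            # estimated 10% to 20% of those infected died
    died_share = infected_share * cfr
    print(f"CFR {cfr:.0%} -> {died_share:.1%} of global population")
```

The product spans roughly 3.3% to 6.7%, consistent with the caption's "3% to 6% of the entire global population" once rounded.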
Image 5 for article titled "The Influenza Pandemic of 1918"
A chart of deaths in major cities (Wikipedia: Ibid.)
Image 6 for article titled "The Influenza Pandemic of 1918"
Dr. Terrence Tumpey (Centers for Disease Control and Prevention) examines a reconstructed version of the 1918 flu. One of the few things known for certain about the influenza in 1918, and for some years after, was that it was, outside the laboratory, exclusively a disease of human beings. In 2013, AIR’s Research and Modeling Group "characterizes the historic 1918 pandemic and estimates the effects of a similar pandemic occurring today using the AIR Pandemic Flu Model". In the model, "a modern day 'Spanish flu' event would result in additional life insurance losses of between USD 15.3–27.8 billion in the United States alone," with 188,000–337,000 deaths in the United States. (Wikipedia: Ibid.)

August 17, 2015 - Tweeting Far and Wide

Location, location, location still matters in a world made smaller by the Internet and social media. Sonia Fernandez, in an article for The UCSB Current, posted on August 17, 2015 with the title above, goes on to say:

In 1970, geographer, cartographer, and UC Santa Barbara professor emeritus Waldo Tobler said, “Everything is related to everything else, but near things are more related than distant things.” This “first law of geography,” which underscores the tendency to have stronger associations and relationships with things in close proximity than with those that are farther away, is a fundamental principle in the spatially oriented discipline that is geography.

And then we all got online. The advent of social media and the Internet has put the world at our fingertips; in cyberspace, far-flung remote places are just as accessible as neighborhoods across town. Indeed, with the rise of communications technology, it has been thought that location and distance, even geography itself, would become less relevant in modern society.

But a survey of Twitter users suggests that as global as the world’s metropolises have become, the people in them tend to remain staunchly local. “The rules of geography still apply, in spite of the so-called ‘death of distance,’” said UC Santa Barbara geographer Keith Clarke, one of the authors of a paper appearing in the journal PLOS ONE. Clarke, lead author Su Yeon Han, and San Diego State University geographer and professor Ming-Hsiang Tsou tested Tobler’s law with a paper that asks whether global cities enable global views.

To do this, they looked at over a million geotagged tweets emanating from users in and around 50 designated U.S. “home cities” of different population sizes to gauge how geographically aware people were. “We selected Twitter because its messages are ‘big data,’ and cover so many topics,” said Clarke. “Also, Twitter has an open application programming interface that allows you to write scripts to selectively download tweets and their metadata.” It was from this information that the researchers compiled a Global Awareness Index (GAI), a score of users’ awareness of local and distant locations.
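The article does not spell out how the Global Awareness Index is computed. Purely as an illustration, a hypothetical distance-weighted score over geotagged place mentions might be sketched as follows; the function names, the 100 km "local" radius, and the log-distance weighting are all assumptions for this sketch, not the paper's actual method:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def awareness_score(home, mentions, local_radius_km=100):
    """Average log-distance weight of place mentions beyond a 'local' radius.

    Mentions within local_radius_km of home contribute nothing; distant
    mentions contribute more the farther away they are. This is one
    illustrative choice of weighting, not the published GAI formula.
    """
    if not mentions:
        return 0.0
    weights = []
    for lat, lon in mentions:
        d = haversine_km(home[0], home[1], lat, lon)
        weights.append(math.log10(d) if d > local_radius_km else 0.0)
    return sum(weights) / len(mentions)

# Example: a Los Angeles user mentioning New York, Tokyo, and nearby Anaheim.
la = (34.05, -118.24)
mentions = [(40.71, -74.01), (35.68, 139.69), (33.84, -117.91)]
print(round(awareness_score(la, mentions), 2))
```

Under a scheme like this, a user whose tweets mention only nearby places scores near zero, while mentions of distant U.S. and international cities push the score up, mirroring the San Jose vs. Jacksonville contrast the researchers describe.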

According to their findings, Twitter users from the larger cities tended more often to mention distant U.S. cities or other large international cities than did people from mid-size cities. This could be attributable to the large population and tendency of urbanized places to have more frequent movement of people, ideas and commodities, the researchers suggested. For instance, users in technology industry-heavy San Jose, California, demonstrated a high GAI score with tweets mentioning places near and far all over the world, while tweets collected from smaller Jacksonville, Florida — with a lower GAI — tended to concentrate on local and regional places.

Additionally, not all geographical awarenesses are alike. When comparing tweets from two large cities — Los Angeles and New York, both with similarly high GAI scores — the researchers found that the people behind those messages were more likely to mention places geographically closer to them. Angelenos tweeted most often about their own city as well as other places on the West Coast and in the American Southwest as well as Mexico. New Yorkers tweeted most often about New York City and locations on the East Coast and in New England and Canada. Distant cities mentioned in tweets tended also to be the bigger cities.

The levels of global awareness are also elastic: Around the holiday season, mentions of distant cities tended to increase, perhaps as users made travel plans or reached out to family and friends during the season, but then dropped again after the New Year. So, while large, global cities to a certain extent enable global awareness, and the Internet and social media have removed many barriers to this awareness, according to the study, people remain somewhat bound to their physical locations despite the very real possibility of conducting business and communications anywhere and everywhere in the world. “I think globalization is happening,” said Clarke, “but it is impossible to remove the effects of geographic distance and scale.”

Editor's note: Also see "The Death of Distance," a 1996 Environment and Planning B Editorial by Geography Professor Emerita Helen Couclelis.

Image 1 for article titled "Tweeting Far and Wide"
Twitter world map (from The UCSB Current)
Image 2 for article titled "Tweeting Far and Wide"
Geography Professor Keith Clarke. "Trained in scientific and quantitative geography, Dr. Clarke has worked on the integration of the computer into the methods and equipment used for analysis and exploration. Specializing in analytical cartography and geographic information systems, he has conducted fieldwork on disease mapping in Africa, Maya settlements in Central America, and glaciers in Lapland. While a Resident Fellow at the Explorers Club, Dr. Clarke led the mapping for a flag-bearing expedition to Hudson’s Bay and climbed the Mexican volcano Popocatepetl. His research stretches from computer modeling of land use change to detailed mapping of terrain with LIDAR" (National Geographic; photo credit: Ibid.)
Image 3 for article titled "Tweeting Far and Wide"
Tweets were collected within a 20-mile buffer around each center of the 50 major U.S. cities (from the PLOS ONE article, op. cit.)

August 12, 2015 - Big Data Maps the Geology of the Ocean Floor

The following is a University of Sydney news article written by media adviser Jocelyn Prasad and posted August 12, 2015, with the title: “Big Data Maps World’s Ocean Floor”:

Scientists from the University of Sydney’s School of Geosciences have led the creation of the world’s first digital map of the seafloor’s geology. It is the first time in 40 years that the composition of the seafloor, which covers 70 percent of the Earth’s surface, has been mapped; the most recent map was hand drawn in the 1970s.

Published in the latest edition of Geology, the map will help scientists better understand how our oceans have responded, and will respond, to environmental change. It also reveals the deep ocean basins to be much more complex than previously thought.

“In order to understand environmental change in the oceans we need to better understand what is preserved in the geological record in the seabed,” says lead researcher Dr Adriana Dutkiewicz from the University of Sydney. “The deep ocean floor is a graveyard with much of it made up of the remains of microscopic sea creatures called phytoplankton, which thrive in sunlit surface waters. The composition of these remains can help decipher how oceans have responded in the past to climate change.”

A special group of phytoplankton called diatoms produce about a quarter of the oxygen we breathe and make a bigger contribution to fighting global warming than most plants on land. Their dead remains sink to the bottom of the ocean, locking away their carbon.

The new seafloor geology map demonstrates that diatom accumulations on the seafloor are nearly entirely independent of diatom blooms in surface waters in the Southern Ocean. “This disconnect demonstrates that we understand the carbon source, but not the sink,” says co-author Professor Dietmar Muller from the University of Sydney. More research is needed to better understand this relationship.

Dr Dutkiewicz said, “Our research opens the door to future marine research voyages aimed at better understanding the workings and history of the marine carbon cycle. Australia’s new research vessel Investigator is ideally placed to further investigate the impact of environmental change on diatom productivity. We urgently need to understand how the ocean responds to climate change.”

Some of the most significant changes to the seafloor map are in the oceans surrounding Australia. “The old map suggests much of the Southern Ocean around Australia is mainly covered by clay blown off the continent, whereas our map shows this area is actually a complex patchwork of microfossil remains,” said Dr Dutkiewicz. “Life in the Southern Ocean is much richer than previously thought.”

Dr Dutkiewicz and colleagues analysed and categorised around 15,000 seafloor samples – taken over half a century by research cruise ships – to generate the data for the map. She teamed up with big data experts at National ICT Australia (NICTA) to find the best way to use algorithms to turn this multitude of point observations into a continuous digital map.
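That point-to-map step can be pictured with a much simpler method than the one NICTA applied. In the sketch below, the sample coordinates and sediment classes are invented, and a plain nearest-neighbour rule stands in for the study's actual machine-learning algorithms; it shows only the general idea of turning scattered observations into a continuous gridded classification:

```python
import math

# (lon, lat, sediment class): hypothetical point observations
samples = [
    (150.0, -40.0, "diatom ooze"),
    (155.0, -45.0, "diatom ooze"),
    (140.0, -20.0, "clay"),
    (160.0, -25.0, "calcareous ooze"),
]

def nearest_class(lon, lat):
    """Assign the sediment class of the closest sample point."""
    # crude planar distance; a real implementation would use
    # great-circle distance on the sphere
    return min(samples, key=lambda s: math.hypot(s[0] - lon, s[1] - lat))[2]

# classify every cell of a coarse 5-degree grid
grid = {(lon, lat): nearest_class(lon, lat)
        for lon in range(140, 165, 5)
        for lat in range(-45, -15, 5)}

print(grid[(150, -40)])  # a cell sitting on a sample point: "diatom ooze"
```

Every grid cell gets a class, even far from any sample, which is why the choice of algorithm (and the density of the 15,000 samples) matters so much for the resulting map.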

“Recent images of Pluto’s icy plains are spectacular, but the process of unveiling the hidden geological secrets of the abyssal plains of our own planet was equally full of surprises!” co-author Dr Simon O’Callaghan from NICTA said.

This research is supported by the Science and Industry Endowment Fund. The digital data and interactive map are freely available as open access resources.

Editor’s note: Alumna Dawn Wright notes the following:

There has been a lot of press about this particular study, and the claims in press releases from this particular university are actually quite controversial and are ruffling feathers in the seafloor mapping community. The claims to have made the "first digital map of the geology of the seafloor” and the "first time in 40 years that the composition of the seafloor has been mapped” are a bit misleading. For example, Harris et al., representing GRID-Arendal, Conservation International, and Geoscience Australia, actually did this in 2013 (web site, story map) and published their results in a 2014 paper, although their map is about the geomorphology of the seafloor (geology in terms of the shape and structure of the rocks and sediments). The claims of the current authors are more along the lines of geology in terms of the composition of sediments. They draw some important conclusions, especially about diatomaceous ooze. Still, I would feel better about their claims to be “first” if they had referenced the prior work of others, such as the SedDB project, GeoMapApp, and Global Multi-Resolution Topography (GMRT) syntheses at Columbia University, or dbSEABED at the University of Colorado, all of which have been in existence for many, many years, making global digital maps along the way. And the work at Columbia is done at a facility headed up by UCSB alumna Suzanne Carbotte, who was a student of Ken Macdonald’s in Earth Science and a graduate student collaborator of mine on my UCSB dissertation.

Image 1 for article titled "Big Data Maps the Geology of the Ocean Floor"
This is a still shot of the world's first digital map of the seafloor's geology. Credit: EarthByte Group, School of Geosciences, University of Sydney, Sydney, NSW 2006, Australia; National ICT Australia (NICTA), Australian Technology Park, Eveleigh, NSW 2015, Australia
Image 2 for article titled "Big Data Maps the Geology of the Ocean Floor"
Digital maps of seafloor sediments. From Dutkiewicz et al. Geology, Aug. 5, 2015

August 11, 2015 - Author Charles L. King Donates His Book on the History of Planning in Santa Barbara County to the Geography Department

On Tuesday, August 4, 2015, in the lobby of the Santa Barbara Public Library, Chair Dan Montello accepted from Mr. Charles L. King a copy of Mr. King’s self-published book, “Santa Barbara County Planning Commission Plans for Orderly Development 1927 to 1965,” a history of planning in Santa Barbara County during the mid-20th century. Mr. King was a planner for the County of Santa Barbara from 1956 to 1988. We currently house the book in the Department of Geography’s main office in Ellison Hall, as it awaits a space in our library.

Mr. King explains the content and purpose of his book this way: “It all began in December 1927 when the Santa Barbara County Planning Commission was created by the County Board of Supervisors to make decisions on land use issues . . . . My book is an attempt to bring to life the commission, its staff, and their work in detail, and to give the reader a front row seat to all the land use issues they decided upon, often without the full cooperation or approval of a large segment of the community, which did not understand the role of the planning commission . . . . As the population of Santa Barbara County began to grow, some citizens, not elected officials, realized there was a need to establish rules to live by in order to maintain the value of their individual properties and way of life.

Fortunately, in 1927 the California State Legislature, under Chapter 874, adopted enabling legislation allowing counties to voluntarily create a County Planning Commission. Due to this new law, a number of property owners in Montecito petitioned the Board of Supervisors to create a Planning Commission and appoint citizens to serve on the commission. The Board of Supervisors created a County Planning Commission in December 1927. The goal of the commission was to create a Master Plan for the county and to bring about orderly development with zoning and subdivision regulations . . . . Santa Barbara County became one of the first four counties in the entire United States and the first county in the State of California to create a Planning Commission.”

Image 1 for article titled "Author Charles L. King Donates His Book on the History of Planning in Santa Barbara County to the Geography Department"
The original Santa Barbara County and its subsequent versions. From Mr. King's book.
Image 2 for article titled "Author Charles L. King Donates His Book on the History of Planning in Santa Barbara County to the Geography Department"
Cartoon from 1969 highlights some of the more confusing consequences of land use planning. Ibid.

August 08, 2015 - MIT Claims To Have Found a Language Universal that Ties All Languages Together

The following article discusses the concept of a language universal which would validate Chomsky's controversial theory about a “universal grammar.” It was written as an Ars Technica article by Cathleen O’Grady and posted August 6, 2015:

Language takes an astonishing variety of forms across the world—to such a huge extent that a long-standing debate rages around the question of whether all languages have even a single property in common. Well, there’s a new candidate for the elusive title of “language universal” according to a paper in this week’s issue of PNAS. All languages, the authors say, self-organize in such a way that related concepts stay as close together as possible within a sentence, making it easier to piece together the overall meaning.

Language universals are a big deal because they shed light on heavy questions about human cognition. The most famous proponent of the idea of language universals is Noam Chomsky, who suggested a “universal grammar” that underlies all languages. Finding a property that occurs in every single language would suggest that some element of language is genetically predetermined and perhaps that there is specific brain architecture dedicated to language.

However, other researchers argue that there are vanishingly few candidates for a true language universal. They say that there is enormous diversity at every possible level of linguistic structure from the sentence right down to the individual sounds we make with our mouths (that’s without including sign languages).

There are widespread tendencies across languages, they concede, but they argue that these patterns are just a signal that languages find common solutions to common problems. Without finding a true universal, it’s difficult to make the case that language is a specific cognitive package rather than a more general result of the remarkable capabilities of the human brain.
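The property at stake, which the PNAS authors call dependency length minimization, can be made concrete with a toy calculation: score a word order by summing the distances between grammatically linked words, and prefer orders with a smaller total. The sentence and dependency links below are invented for illustration:

```python
def dependency_length(words, links):
    """Sum of distances (in word positions) over all head-dependent links."""
    pos = {w: i for i, w in enumerate(words)}
    return sum(abs(pos[head] - pos[dep]) for head, dep in links)

# invented dependency links for "John threw out the trash"
links = [("threw", "John"), ("threw", "out"),
         ("threw", "trash"), ("trash", "the")]

grouped = ["John", "threw", "out", "the", "trash"]    # dependents stay close
stretched = ["John", "threw", "the", "trash", "out"]  # "out" drifts from "threw"

print(dependency_length(grouped, links))    # → 6
print(dependency_length(stretched, links))  # → 7
```

The claim of the study is that, across the 37 languages examined, speakers gravitate toward orders like the first one, with the smaller total.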

Read the complete article here; the MIT news article is here.

Image 1 for article titled "MIT Claims To Have Found a Language Universal that Ties All Languages Together"
Credit: from the Ars Technica article; graphic credit: flickr user: Fernando Marcelo Cañuelo
Image 2 for article titled "MIT Claims To Have Found a Language Universal that Ties All Languages Together"
Now a new study of 37 languages by three MIT researchers has shown that most languages move toward “dependency length minimization” (DLM) in practice. That means language users have a global preference for more locally grouped dependent words, whenever possible. From the MIT News article, op. cit.

August 02, 2015 - California "Rain Debt" Equal to Average Full Year of Precipitation

A NASA press release dated July 20, 2015, with the title above, points out that a new NASA study has concluded that California accumulated a debt of about 20 inches of precipitation between 2012 and 2015 -- the average amount expected to fall in the state in a single year. The deficit was driven primarily by a lack of the air currents, rich in water vapor, that move inland from the Pacific Ocean.

In an average year, 20 to 50 percent of California's precipitation comes from relatively few but extreme events called atmospheric rivers, which move from over the Pacific Ocean to the California coast. "When they say that an atmospheric river makes landfall, it's almost like a hurricane, without the winds. They cause extreme precipitation," said study lead author Andrey Savtchenko at NASA's Goddard Space Flight Center in Greenbelt, Maryland.

Savtchenko and his colleagues examined data from 17 years of satellite observations and 36 years of combined observations and model data to understand how precipitation has varied in California since 1979. The results were published Thursday in the Journal of Geophysical Research – Atmospheres, a journal of the American Geophysical Union.

The state as a whole can expect an average of about 20 inches of precipitation each year, with regional differences. But the total amount can vary as much as 30 percent from year to year, according to the study. In non-drought periods, wet years often alternate with dry years to balance out in the short term. However, from 2012 to 2014, California accumulated a deficit of almost 13 inches, and the 2014-2015 wet season added another seven inches, for a total accumulated deficit of 20 inches over the course of three dry years.

The majority of that precipitation loss is attributed to a high-pressure system in the atmosphere over the eastern Pacific Ocean that has interfered with the formation of atmospheric rivers since 2011. Atmospheric rivers occur all over the world. They are narrow, concentrated tendrils of water vapor that travel through the atmosphere similar to, and sometimes with, the winds of a jet stream. Like a jet stream, they typically travel from west to east. The ones destined for California originate over the tropical Pacific, where warm ocean water evaporates a lot of moisture into the air. The moisture-rich atmospheric rivers, informally known as the Pineapple Express, then break northward toward North America.

Earlier this year, a NASA research aircraft participated in the 2015 field campaign to improve understanding of when and how atmospheric rivers reach California. Some of the water vapor rains out over the ocean, but the show really begins when an atmospheric river reaches land. Two reached California around Dec. 1 and 10, 2014, and brought more than three inches of rain, according to the multi-satellite dataset of NASA's Tropical Rainfall Measuring Mission (TRMM). The inland terrain, particularly mountains, forces the moist air to higher altitudes, where lower pressure causes it to expand and cool. The cooler air condenses the concentrated pool of water vapor into torrential rains, or into snowfall, as happens over the Sierra Nevada Mountains, where water is stored in the snowpack until the spring melt just before the growing season.

The current drought isn't the first for California. Savtchenko and his colleagues recreated a climate record for 1979 to the present using the Modern-Era Retrospective Analysis for Research and Applications, or MERRA. Their efforts show that a 27.5 inch deficit of rain and snow occurred in the state between 1986 and 1994. "Drought has happened here before. It will happen again, and some research groups have presented evidence it will happen more frequently as the planet warms," Savtchenko said. "But, even if the climate doesn’t change, are our demands for fresh water sustainable?"

The current drought has been notably severe because, since the late 1980s, California's population, industry, and agriculture have experienced tremendous growth, with a correlating growth in their demand for water. Human consumption has depleted California's reservoirs and groundwater reserves, as shown by data from NASA's Gravity Recovery and Climate Experiment (GRACE) mission, leading to mandatory water rationing.

"The history of the American West is written in great decade-long droughts followed by multi-year wet periods," said climatologist Bill Patzert at NASA's Jet Propulsion Laboratory in Pasadena, California. He was not involved in the research. "Savtchenko and his team have shown how variable California rainfall is.” According to Patzert, this study added nuance to how scientists may interpret the atmospheric conditions that cause atmospheric rivers and an El Niño's capacity to bust the drought. Since March, rising sea surface temperatures in the central equatorial Pacific have indicated the formation of El Niño conditions. El Niño conditions are often associated with higher rainfall to the western United States, but it’s not guaranteed.

Savtchenko and his colleagues show that El Niño contributes only six percent to California's precipitation variability and is one factor among other, more random effects that influence how much rainfall the state receives. While it’s more likely El Niño increases precipitation in California, it’s still possible it will have no, or even a drying, effect.

A strong El Niño that lasts through the rainy months, from November to March, is more likely to increase the amount of rain that reaches California, and Savtchenko noted the current El Niño is quickly strengthening. The National Oceanic and Atmospheric Administration (NOAA), which monitors El Niño events, ranks it as the third strongest in the past 65 years for May and June. Still, it will likely take several years of higher than normal rain and snowfall to recover from the current drought.

"If this El Niño holds through winter, California’s chances to recoup some of the precipitation increase. Unfortunately, so do the chances of floods and landslides," Savtchenko said. “Most likely the effects would be felt in late 2015-2016.”

Image 1 for article titled "California "Rain Debt" Equal to Average Full Year of Precipitation"
California's accumulated precipitation “deficit” from 2012 to 2014 shown as a percent change from the 17-year average based on TRMM multi-satellite observations. Credits: NASA/Goddard Scientific Visualization Studio
Image 2 for article titled "California "Rain Debt" Equal to Average Full Year of Precipitation"
The atmospheric rivers that drenched California in December 2014 are shown in this data visualization: water vapor (white) and precipitation (red to yellow). Credits: NASA/Goddard Scientific Visualization Studio
Image 3 for article titled "California "Rain Debt" Equal to Average Full Year of Precipitation"
California drought conditions as of June 30, 2015

July 31, 2015 - Alumnus Park Williams Discusses California Climate with KQED Science Editor

“Fog season is with us once again. And whether it’s the ground-level “pea soup” of legend or the looming overcast known as the marine layer, there’s a reason it’s called California’s natural air-conditioning: fog and clouds are vital cogs in keeping the coastal thermostat turned down. But that advantage could be disappearing” (source).

KQED Science Editor Craig Miller talks with climate scientist Park Williams [PhD 2009] about his recently published work on California’s vanishing clouds. Williams is an assistant research professor at Columbia University’s Lamont-Doherty Earth Observatory in New York, but the gray mantle of California’s summer coastline keeps drawing him back here — and it’s not just the romance of it. It turns out that fog — any kind of cloud, actually — is a great regulator not just of heat, but of drought.

Park Williams: Yeah, fog regulates drought. It does it in a couple of ways. In ecosystems, fog drops water directly on plants. And when the water collects on the plants, it then drops into the soil and is available for the plants to use. Fog, and clouds that are higher than fog, also shade the sun, and that allows plants more time to use the water they’ve collected from the fog. In cities, fog and clouds that are higher than fog — overcast clouds — are important as well, because they regulate surface temps.

Craig Miller: And it seems like cities are where the problem is.

PW: We looked at Southern California and found that in large cities — L.A. and San Diego — the heights of low clouds during summertime have been increasing; they’ve been rising away from the city.

CM: Why would that be?

PW: Cities have been warming, and essentially you need to go higher into the atmosphere before you finally get to where it’s cool enough to have water droplets condense and clouds can form.

CM: This is sounding like the “urban heat island” effect at work here. Is there a smoking gun for that?

PW: The minimum temperature at night has been rising rapidly. During the daytime we’ve seen slow warming, but not nearly at the pace that nighttime warming is. That’s the fingerprint of the urban heat island that we expect. The urban heat island effect really is a nighttime phenomenon because cement takes a long time to get rid of its heat, and that causes nighttime temperatures to rise.

CM: And where urbanization reaches inland, like, say, the Inland Empire region east of L.A., this phenomenon seems to follow. For example, looking at readings from airports, you found there’s 87 percent less fog in Ontario since 1950, and that overall cloud cover — technically the “frequency” of clouds — has been reduced by about half. That’s stunning.

PW: That means Ontario is getting a lot more sunlight in the morning hours, which is then feeding back to heat up Ontario and make clouds less likely in the future.

CM: You’re describing a kind of vicious cycle.

PW: Clouds will become thinner over Los Angeles. That allows more sunlight to be absorbed by the ground, which causes more surface heat, which causes clouds to have to form higher up, which causes clouds to be thinner, which perpetuates this process of more sunlight, higher clouds — and eventually more sunlight, no clouds.

CM: But you don’t foresee fog and overcast vanishing everywhere along the coast, only in the most urbanized areas?

PW: It depends on where you are. Since these fog and low marine clouds during the summer are regulators of drought, and since global warming is projected to enhance drought in much of California, these clouds could be very nice moderators of the global warming process and increased drought in coastal California.

CM: But in the cities …

PW: Then we see basically the moderating effect of these clouds probably getting canceled out, and rapid increases of drought in the mountain ecosystems surrounding the cities of Southern California.

CM: It sounds like when you get north into the coast redwoods, which are so dependent on the fog, the prognosis isn’t so bad.

PW: I think it’s not so bad. We’ll have to wait and see. Certainly these clouds are complicated and there are aspects to them we still don’t understand so well. We’ve had a tough time getting computers to actually model the behavior of these clouds.

CM: It makes you wonder if we might come to miss the June Gloom.

PW: I think the ways it’ll be missed are — energy bills rise because everything’s warmer, heat waves will be warmer and that’ll have some public health implications. But there’ll be benefits, too. People like going to the beach when it’s sunny and not cloudy, so June Gloom gets in the way of family vacations. It’ll be nice to have better beach weather.

CM: I’d call that a silver lining except I think you need a cloud for that.
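Williams's point about warmer cities pushing the cloud base upward can be sketched with a standard meteorological rule of thumb, not taken from his study: the lifting condensation level, the height where rising air cools enough for droplets to condense, sits roughly 125 meters higher for every degree Celsius that surface temperature exceeds the dew point. The temperature readings below are hypothetical:

```python
def lcl_height_m(temp_c, dewpoint_c):
    """Approximate cloud-base (lifting condensation level) height in metres,
    using the rough 125 m per degree C of dew-point depression rule."""
    return 125.0 * max(0.0, temp_c - dewpoint_c)

# hypothetical readings: the same marine air mass (dew point 14 C)
# over a cooler coastal strip versus a warmer city centre
print(lcl_height_m(18.0, 14.0))  # → 500.0
print(lcl_height_m(21.0, 14.0))  # → 875.0 (warmer surface, higher cloud base)
```

A few degrees of urban warming, with the marine moisture unchanged, is enough to lift the cloud base by hundreds of meters, which is the pattern the study reports over L.A. and San Diego.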

Image 1 for article titled "Alumnus Park Williams Discusses California Climate with KQED Science Editor"
Sunlight through Marin fog. (From the KQED article; Brocken Inaglory/Wikimedia Commons)
Image 2 for article titled "Alumnus Park Williams Discusses California Climate with KQED Science Editor"
Fog settles over San Francisco Bay, with the Golden Gate Bridge, Coit Tower and Bay Bridge visible in the distance. Ibid.
Image 3 for article titled "Alumnus Park Williams Discusses California Climate with KQED Science Editor"
“Fog Finger” by Susan Baumgart, Senior Artist, Web Master, and Photographer for the Department until her untimely death in 2005. “Fog Slakes Redwoods’ Thirst: Fog blanketed the ocean. A robust wind packed the shoreward edge to the Santa Lucia Mountains, which dove steeply into the Pacific. Big Creek, which eroded the mountains faster than they rose, cut a narrow passage to the sea. The fog blasted through the gap, poking a finger of fog upstream. The moisture in the fog slaked the thirst of the redwoods. Without the fog, the redwoods would not survive the summer drought. With the fog, Big Creek was the trees’ southernmost stand.”
Image 4 for article titled "Alumnus Park Williams Discusses California Climate with KQED Science Editor"
Park Williams received his PhD in 2009 (Christopher Still, Chair); dissertation title: “Tree Rings, Climate Variability, and Coastal Summer Stratus Clouds in the Western United States.” Currently, Park is a Lamont Assistant Research Professor at the Lamont Doherty Earth Observatory of Columbia University.

July 30, 2015 - Fossil Fuels May Bring Major Changes to Carbon Dating

The following Climate Central article was written by Alison Kanski and posted July 28, 2015, with the title above:

Radiocarbon dating has been helping put the planet’s history in the right order since it was invented in the 1940s, giving scientists a key way to determine the age of artifacts like the Dead Sea Scrolls and the Shroud of Turin. Thanks to fossil fuel emissions, though, the method used to date these famous artifacts may be in for a change.

The burning of fossil fuels is altering the ratio of carbon in the atmosphere, which may cause objects tested in the coming decades to seem hundreds or thousands of years older than they actually are, according to a study published in the Proceedings of the National Academy of Sciences. A cotton T-shirt manufactured and tested in 2050 may appear to be the same age as an artifact from the 11th century when dated using the radiocarbon method. A new shirt made in 2100, if emissions continue unabated, could appear to come from the year 100, alongside something worn by a Roman soldier. In short, future human emissions may alter one of the most reliable methods for learning about the past.

Radiocarbon dating relies on the amount of radiocarbon, or carbon-14, remaining in an object to determine its approximate age. Radiocarbon is a radioactive form of carbon that’s created when nitrogen reacts with cosmic rays in the upper atmosphere. It occurs only in trace amounts, but it is present in every living thing.

Carbon-14 can combine with oxygen in the atmosphere to create carbon dioxide, which is then absorbed by plants and makes its way through the food chain. The amount of carbon-14 in living plants and animals matches the amount in the atmosphere, but when plants and animals die, they no longer absorb carbon-14. Because radiocarbon has a known rate of decay, scientists can determine about how long it has been since the plant or animal was alive. The lower the amount of radiocarbon, the older the object.

But big changes in the atmosphere can throw off this method, like releasing tons of extra carbon dioxide into the air from burning fossil fuels. Because fossil fuels like coal and oil are so old, they have no radiocarbon left. When burned, they increase the amount of carbon dioxide, which dilutes the radiocarbon in the atmosphere and the amount that can be absorbed by organic material. “Fossil fuels have lost all of their radiocarbon over millions of years of radioactive decay,” said Heather Graven, author of the study published last week. “This makes the atmosphere appear as though it has ‘aged.’”
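The age calculation described above can be sketched in a few lines, using the commonly cited carbon-14 half-life of about 5,730 years; the sample ratios below are illustrative only:

```python
import math

HALF_LIFE = 5730.0  # years; commonly cited carbon-14 half-life

def radiocarbon_age(ratio):
    """Years elapsed, given the fraction of the original carbon-14 remaining."""
    return -HALF_LIFE * math.log(ratio) / math.log(2)

print(round(radiocarbon_age(0.5)))   # → 5730 (one half-life has passed)
print(round(radiocarbon_age(0.25)))  # → 11460 (two half-lives)

# The dilution effect: if fossil-fuel CO2 lowered the atmosphere's
# carbon-14 fraction to 0.98 of its former level, a brand-new sample
# would already read as over a century old.
print(round(radiocarbon_age(0.98)))  # → 167
```

This is why a diluted atmosphere makes new objects look old: the method cannot tell whether a low carbon-14 fraction reflects decay in the sample or a lower starting level in the air.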

Scientists are used to a bit of wiggle with carbon-14 dating; it can vary as much as 30 to 100 years from the actual age. But the changes from emissions will require some extra adjustment, even in the study’s best-case-scenario emissions projection. “If emissions are rapidly reduced, then the decrease in the fraction of radiocarbon in the atmosphere will be equivalent to only about a hundred years of radioactive decay,” said Graven.

For those who use carbon dating, like archaeologists, physicists, forensic scientists, and even art historians, the change in radiocarbon will complicate their work, according to Timothy Jull, a radiocarbon scientist. “There are all these complicated effects,” said Jull, also a professor of geosciences at the University of Arizona, who was not involved in the study. “There will be some ambiguities about whether it’s 300 years old or relatively recent. We’re going to need more information.”

Scientists could begin seeing the effects on radiocarbon as soon as 2020, when the ratio is expected to drop below pre-industrial levels, according to Graven. But she hopes the projections in her study could also help scientists prepare for the changes to come. And, hopefully, they will keep scientists from mistaking a T-shirt from 2050 for William the Conqueror's blouse.

Image 1 for article titled "Fossil Fuels May Bring Major Changes to Carbon Dating"
Part of the Great Isaiah Scroll, one of the Dead Sea Scrolls. The development of radiocarbon dating has had a profound impact on archaeology; it is often described as the "radiocarbon revolution." In the words of anthropologist R. E. Taylor, "14C data made a world prehistory possible by contributing a time scale that transcends local, regional, and continental boundaries." It provides more accurate dating within sites than previous methods, which were usually derived from either stratigraphy or typologies (e.g., of stone tools or pottery); it also allows comparison and synchronization of events across great distances. The advent of radiocarbon dating may even have led to better field methods in archaeology, since better data recording leads to firmer association of objects with the samples to be tested. These improved field methods were sometimes motivated by attempts to prove that a 14C date was incorrect. Taylor also suggests that the availability of definite date information freed archaeologists from the need to focus so much of their energy on determining the dates of their finds, and led to an expansion of the questions archaeologists were willing to research. For example, questions about the evolution of human behavior were much more frequently seen in archaeology, beginning in the 1970s. Wikipedia: Radiocarbon dating
Image 2 for article titled "Fossil Fuels May Bring Major Changes to Carbon Dating"
A mammoth molar that has been cored for a radiocarbon sample. From the Climate Central article; photo credit: Travis/flickr