Noam Chomsky: The Sledgehammer Worldview

The front page of The New York Times on June 26 featured a photo of women mourning a murdered Iraqi.

He is one of the innumerable victims of the ISIS (Islamic State in Iraq and Syria) campaign in which the Iraqi army, armed and trained by the U.S. for many years, quickly melted away, abandoning much of Iraq to a few thousand militants, hardly a new experience in imperial history.

Right above the picture is the newspaper’s famous motto: “All the News That’s Fit to Print.”

There is a crucial omission. The front page should display the words of the Nuremberg judgment of prominent Nazis — words that must be repeated until they penetrate general consciousness: Aggression is “the supreme international crime differing only from other war crimes in that it contains within itself the accumulated evil of the whole.”

And alongside these words should be the admonition of the chief prosecutor for the United States, Robert Jackson: “The record on which we judge these defendants is the record on which history will judge us tomorrow. To pass these defendants a poisoned chalice is to put it to our own lips as well.”

The U.S.-U.K. invasion of Iraq was a textbook example of aggression. Apologists invoke noble intentions, which would be irrelevant even if the pleas were sustainable.

For the World War II tribunals, it mattered not a jot that Japanese imperialists were intent on bringing an “earthly paradise” to the Chinese they were slaughtering, or that Hitler sent troops into Poland in 1939 in self-defense against the “wild terror” of the Poles. The same holds when we sip from the poisoned chalice.

Those at the wrong end of the club have few illusions. Abdel Bari Atwan, editor of a Pan-Arab website, observes that “the main factor responsible for the current chaos [in Iraq] is the U.S./Western occupation and the Arab backing for it. Any other claim is misleading and aims to divert attention [away] from this truth.”

In a recent interview with Moyers & Company, Iraq specialist Raed Jarrar outlines what we in the West should know. Like many Iraqis, he is half-Shiite, half-Sunni, and in preinvasion Iraq he barely knew the religious identities of his relatives because “sect wasn’t really a part of the national consciousness.”

Jarrar reminds us that “this sectarian strife that is destroying the country … clearly began with the U.S. invasion and occupation.”

The aggressors destroyed “Iraqi national identity and replaced it with sectarian and ethnic identities,” beginning immediately when the U.S. imposed a Governing Council based on sectarian identity, a novelty for Iraq.

By now, Shiites and Sunnis are the bitterest enemies, thanks to the sledgehammer wielded by Donald Rumsfeld and Dick Cheney (respectively the former U.S. Secretary of Defense and vice president during the George W. Bush administration) and others like them who understand nothing beyond violence and terror and have helped to create conflicts that are now tearing the region to shreds.

Other headlines report the resurgence of the Taliban in Afghanistan. Journalist Anand Gopal explains the reasons in his remarkable book, No Good Men Among the Living: America, the Taliban, and the War through Afghan Eyes.

In 2001-02, when the U.S. sledgehammer struck Afghanistan, the al-Qaida outsiders there soon disappeared and the Taliban melted away, many choosing in traditional style to accommodate to the latest conquerors.

But Washington was desperate to find terrorists to crush. The strongmen they imposed as rulers quickly discovered that they could exploit Washington’s blind ignorance and attack their enemies, including those eagerly collaborating with the American invaders.

Soon the country was ruled by ruthless warlords, while many former Taliban who sought to join the new order recreated the insurgency.

The sledgehammer was later picked up by President Obama as he “led from behind” in smashing Libya.

In March 2011, amid an Arab Spring uprising against Libyan ruler Moammar Gadhafi, the U.N. Security Council passed Resolution 1973, calling for “a cease-fire and a complete end to violence and all attacks against, and abuses of, civilians.”

The imperial triumvirate — France, England, the U.S. — instantly chose to violate the Resolution, becoming the air force of the rebels and sharply enhancing violence.

Their campaign culminated in the assault on Gadhafi’s refuge in Sirte, which they left “utterly ravaged,” “reminiscent of the grimmest scenes from Grozny, towards the end of Russia’s bloody Chechen war,” according to eyewitness reports in the British press. At a bloody cost, the triumvirate accomplished its goal of regime change in violation of pious pronouncements to the contrary.

The African Union strongly opposed the triumvirate assault. As reported by Africa specialist Alex de Waal in the British journal International Affairs, the AU established a “road map” calling for cease-fire, humanitarian assistance, protection of African migrants (who were largely slaughtered or expelled) and other foreign nationals, and political reforms to eliminate “the causes of the current crisis,” with further steps to establish “an inclusive, consensual interim government, leading to democratic elections.”

The AU framework was accepted in principle by Gadhafi but dismissed by the triumvirate, who “were uninterested in real negotiations,” de Waal observes.

The outcome is that Libya is now torn by warring militias, while jihadi terror has been unleashed in much of Africa along with a flood of weapons, reaching also to Syria.

There is plenty of evidence of the consequences of resort to the sledgehammer. Take the Democratic Republic of Congo, formerly the Belgian Congo, a huge country rich in resources — and one of the worst contemporary horror stories. It had a chance for successful development after independence in 1960, under the leadership of Prime Minister Patrice Lumumba.

But the West would have none of that. CIA head Allen Dulles determined that Lumumba’s “removal must be an urgent and prime objective” of covert action, not least because U.S. investments might have been endangered by what internal documents refer to as “radical nationalists.”

Under the supervision of Belgian officers, Lumumba was murdered, realizing President Eisenhower’s wish that he “would fall into a river full of crocodiles.” Congo was handed over to the U.S. favorite, the murderous and corrupt dictator Mobutu Sese Seko, and on to today’s wreckage of Africa’s hopes.

Closer to home it is harder to ignore the consequences of U.S. state terror. There is now great concern about the flood of children fleeing to the U.S. from Central America.

The Washington Post reports that the surge is “mostly from Guatemala, El Salvador and Honduras” — but not Nicaragua. Why? Could it be that when Washington’s sledgehammer was battering the region in the 1980s, Nicaragua was the one country that had an army to defend the population from U.S.-run terrorists, while in the other three countries the terrorists devastating the countries were the armies equipped and trained by Washington?

Obama has proposed a humanitarian response to the tragic influx: more efficient deportation. Do alternatives come to mind?

It is unfair to omit exercises of “soft power” and the role of the private sector. A good example is Chevron’s decision to abandon its widely touted renewable energy programs, because fossil fuels are far more profitable.

Exxon Mobil in turn announced “that its laserlike focus on fossil fuels is a sound strategy, regardless of climate change,” Bloomberg Businessweek reports, “because the world needs vastly more energy and the likelihood of significant carbon reductions is ‘highly unlikely.’”

It is therefore a mistake to remind readers daily of the Nuremberg judgment. Aggression is no longer the “supreme international crime.” It cannot compare with destruction of the lives of future generations to ensure bigger bonuses tomorrow.

Noam Chomsky: Nightmare in Gaza

Amid all the horrors unfolding in the latest Israeli offensive in Gaza, Israel’s goal is simple: quiet-for-quiet, a return to the norm.

For the West Bank, the norm is that Israel continues its illegal construction of settlements and infrastructure so that it can integrate into Israel whatever might be of value, meanwhile consigning Palestinians to unviable cantons and subjecting them to repression and violence.

For Gaza, the norm is a miserable existence under a cruel and destructive siege that Israel administers to permit bare survival but nothing more.

The latest Israeli rampage was set off by the brutal murder of three Israeli boys from a settler community in the occupied West Bank. A month before, two Palestinian boys were shot dead in the West Bank city of Ramallah. That elicited little attention, which is understandable, since it is routine.

“The institutionalized disregard for Palestinian life in the West helps explain not only why Palestinians resort to violence,” Middle East analyst Mouin Rabbani reports, “but also Israel’s latest assault on the Gaza Strip.”

In an interview, human rights lawyer Raji Sourani, who has remained in Gaza through years of Israeli brutality and terror, said, “The most common sentence I heard when people began to talk about cease-fire: Everybody says it’s better for all of us to die and not go back to the situation we used to have before this war. We don’t want that again. We have no dignity, no pride; we are just soft targets, and we are very cheap. Either this situation really improves or it is better to just die. I am talking about intellectuals, academics, ordinary people: Everybody is saying that.”

In January 2006, Palestinians committed a major crime: They voted the wrong way in a carefully monitored free election, handing control of Parliament to Hamas.

The media constantly intone that Hamas is dedicated to the destruction of Israel. In reality, Hamas leaders have repeatedly made it clear that Hamas would accept a two-state settlement in accord with the international consensus that has been blocked by the U.S. and Israel for 40 years.

In contrast, Israel is dedicated to the destruction of Palestine, apart from some occasional meaningless words, and is implementing that commitment.

The crime of the Palestinians in January 2006 was punished at once. The U.S. and Israel, with Europe shamefully trailing behind, imposed harsh sanctions on the errant population and Israel stepped up its violence.

The U.S. and Israel quickly initiated plans for a military coup to overthrow the elected government. When Hamas had the effrontery to foil the plans, the Israeli assaults and the siege became far more severe.

There should be no need to review again the dismal record since. The relentless siege and savage attacks are punctuated by episodes of “mowing the lawn,” to borrow Israel’s cheery expression for its periodic exercises in shooting fish in a pond as part of what it calls a “war of defense.”

Once the lawn is mowed and the desperate population seeks to rebuild somehow from the devastation and the murders, there is a cease-fire agreement. The most recent cease-fire was established after Israel’s October 2012 assault, called Operation Pillar of Defense.

Though Israel maintained its siege, Hamas observed the cease-fire, as Israel concedes. Matters changed in April of this year when Fatah and Hamas forged a unity agreement that established a new government of technocrats unaffiliated with either party.

Israel was naturally furious, all the more so when even the Obama administration joined the West in signaling approval. The unity agreement not only undercuts Israel’s claim that it cannot negotiate with a divided Palestine but also threatens the long-term goal of dividing Gaza from the West Bank and pursuing its destructive policies in both regions.

Something had to be done, and an occasion arose on June 12, when the three Israeli boys were murdered in the West Bank. Early on, the Netanyahu government knew that they were dead, but pretended otherwise, which provided the opportunity to launch a rampage in the West Bank, targeting Hamas.

Prime Minister Benjamin Netanyahu claimed to have certain knowledge that Hamas was responsible. That too was a lie.

One of Israel’s leading authorities on Hamas, Shlomi Eldar, reported almost at once that the killers very likely came from a dissident clan in Hebron that has long been a thorn in the side of Hamas. Eldar added that “I’m sure they didn’t get any green light from the leadership of Hamas, they just thought it was the right time to act.”

The 18-day rampage after the kidnapping, however, succeeded in undermining the feared unity government, and sharply increasing Israeli repression. Israel also conducted dozens of attacks in Gaza, killing five Hamas members on July 7.

Hamas finally reacted with its first rockets in 19 months, providing Israel with the pretext for Operation Protective Edge on July 8.

By July 31, around 1,400 Palestinians had been killed, mostly civilians, including hundreds of women and children. And three Israeli civilians. Large areas of Gaza had been turned into rubble. Four hospitals had been attacked, each another war crime.

Israeli officials laud the humanity of what it calls “the most moral army in the world,” which informs residents that their homes will be bombed. The practice is “sadism, sanctimoniously disguising itself as mercy,” in the words of Israeli journalist Amira Hass: “A recorded message demanding hundreds of thousands of people leave their already targeted homes, for another place, equally dangerous, 10 kilometers away.”

In fact, there is no place in the prison of Gaza safe from Israeli sadism, which may even exceed the terrible crimes of Operation Cast Lead in 2008-2009.

The hideous revelations elicited the usual reaction from the most moral president in the world, Barack Obama: great sympathy for Israelis, bitter condemnation of Hamas and calls for moderation on both sides.

When the current attacks are called off, Israel hopes to be free to pursue its criminal policies in the occupied territories without interference, and with the U.S. support it has enjoyed in the past.

Gazans will be free to return to the norm in their Israeli-run prison, while in the West Bank, Palestinians can watch in peace as Israel dismantles what remains of their possessions.

That is the likely outcome if the U.S. maintains its decisive and virtually unilateral support for Israeli crimes and its rejection of the long-standing international consensus on diplomatic settlement. But the future will be quite different if the U.S. withdraws that support.

In that case it would be possible to move toward the “enduring solution” in Gaza that U.S. Secretary of State John Kerry called for, eliciting hysterical condemnation in Israel because the phrase could be interpreted as calling for an end to Israel’s siege and regular attacks. And – horror of horrors – the phrase might even be interpreted as calling for implementation of international law in the rest of the occupied territories.

Forty years ago Israel made the fateful decision to choose expansion over security, rejecting a full peace treaty offered by Egypt in return for evacuation from the occupied Egyptian Sinai, where Israel was initiating extensive settlement and development projects. Israel has adhered to that policy ever since.

If the U.S. decided to join the world, the impact would be great. Over and over, Israel has abandoned cherished plans when Washington has so demanded. Such are the relations of power between them.

Furthermore, Israel by now has little recourse, after having adopted policies that turned it from a country that was greatly admired to one that is feared and despised, policies it is pursuing with blind determination today in its march toward moral deterioration and possible ultimate destruction.

Could U.S. policy change? It’s not impossible. Public opinion has shifted considerably in recent years, particularly among the young, and it cannot be completely ignored.

For some years there has been a good basis for public demands that Washington observe its own laws and cut off military aid to Israel. U.S. law requires that “no security assistance may be provided to any country the government of which engages in a consistent pattern of gross violations of internationally recognized human rights.”

Israel most certainly is guilty of this consistent pattern, and has been for many years.

Sen. Patrick Leahy of Vermont, author of this provision of the law, has brought up its potential applicability to Israel in specific cases, and with a well-conducted educational, organizational and activist effort such initiatives could be pursued successively.

That could have a very significant impact in itself, while also providing a springboard for further actions to compel Washington to become part of “the international community” and to observe international law and norms.

Nothing could be more significant for the tragic Palestinian victims of many years of violence and repression.

By Noam Chomsky, Truthout, August 3, 2014

Noam Chomsky: How Many Minutes to Midnight?

If some extraterrestrial species were compiling a history of Homo sapiens, they might well break their calendar into two eras: BNW (before nuclear weapons) and NWE (the nuclear weapons era). The latter era, of course, opened on August 6, 1945, the first day of the countdown to what may be the inglorious end of this strange species, which attained the intelligence to discover the effective means to destroy itself, but — so the evidence suggests — not the moral and intellectual capacity to control its worst instincts.

Day one of the NWE was marked by the “success” of Little Boy, a simple atomic bomb. On day four, Nagasaki experienced the technological triumph of Fat Man, a more sophisticated design. Five days later came what the official Air Force history calls the “grand finale,” a 1,000-plane raid — no mean logistical achievement — attacking Japan’s cities and killing many thousands of people, with leaflets falling among the bombs reading “Japan has surrendered.” Truman announced that surrender before the last B-29 returned to its base.

Those were the auspicious opening days of the NWE. As we now enter its 70th year, we should be contemplating with wonder that we have survived. We can only guess how many years remain.

Some reflections on these grim prospects were offered by General Lee Butler, former head of the U.S. Strategic Command (STRATCOM), which controls nuclear weapons and strategy. Twenty years ago, he wrote that we had so far survived the NWE “by some combination of skill, luck, and divine intervention, and I suspect the latter in greatest proportion.”

Reflecting on his long career in developing nuclear weapons strategies and organizing the forces to implement them efficiently, he described himself ruefully as having been “among the most avid of these keepers of the faith in nuclear weapons.” But, he continued, he had come to realize that it was now his “burden to declare with all of the conviction I can muster that in my judgment they served us extremely ill.” And he asked, “By what authority do succeeding generations of leaders in the nuclear-weapons states usurp the power to dictate the odds of continued life on our planet? Most urgently, why does such breathtaking audacity persist at a moment when we should stand trembling in the face of our folly and united in our commitment to abolish its most deadly manifestations?”

He termed the U.S. strategic plan of 1960 that called for an automated all-out strike on the Communist world “the single most absurd and irresponsible document I have ever reviewed in my life.” Its Soviet counterpart was probably even more insane. But it is important to bear in mind that there are competitors, not least among them the easy acceptance of extraordinary threats to survival.

Survival in the Early Cold War Years

According to received doctrine in scholarship and general intellectual discourse, the prime goal of state policy is “national security.” There is ample evidence, however, that the doctrine of national security does not encompass the security of the population. The record reveals that, for instance, the threat of instant destruction by nuclear weapons has not ranked high among the concerns of planners. That much was demonstrated early on, and remains true to the present moment.

In the early days of the NWE, the U.S. was overwhelmingly powerful and enjoyed remarkable security: it controlled the hemisphere, the Atlantic and Pacific oceans, and the opposite sides of those oceans as well. Long before World War II, it had already become by far the richest country in the world, with incomparable advantages. Its economy boomed during the war, while other industrial societies were devastated or severely weakened. By the opening of the new era, the U.S. possessed about half of total world wealth and an even greater percentage of its manufacturing capacity.

There was, however, a potential threat: intercontinental ballistic missiles with nuclear warheads. That threat was discussed in the standard scholarly study of nuclear policies, carried out with access to high-level sources — Danger and Survival: Choices About the Bomb in the First Fifty Years by McGeorge Bundy, national security adviser during the Kennedy and Johnson presidencies.

Bundy wrote that “the timely development of ballistic missiles during the Eisenhower administration is one of the best achievements of those eight years. Yet it is well to begin with a recognition that both the United States and the Soviet Union might be in much less nuclear danger today if [those] missiles had never been developed.” He then added an instructive comment: “I am aware of no serious contemporary proposal, in or out of either government, that ballistic missiles should somehow be banned by agreement.” In short, there was apparently no thought of trying to prevent the sole serious threat to the U.S., the threat of utter destruction in a nuclear war with the Soviet Union.

Could that threat have been taken off the table? We cannot, of course, be sure, but it was hardly inconceivable. The Russians, far behind in industrial development and technological sophistication, were in a far more threatening environment. Hence, they were significantly more vulnerable to such weapons systems than the U.S. There might have been opportunities to explore these possibilities, but in the extraordinary hysteria of the day they could hardly have even been perceived. And that hysteria was indeed extraordinary. An examination of the rhetoric of central official documents of that moment like National Security Council Paper NSC-68 remains quite shocking, even discounting Secretary of State Dean Acheson’s injunction that it is necessary to be “clearer than truth.”

One indication of possible opportunities to blunt the threat was a remarkable proposal by Soviet ruler Joseph Stalin in 1952, offering to allow Germany to be unified with free elections on the condition that it would not then join a hostile military alliance. That was hardly an extreme condition in light of the history of the past half-century during which Germany alone had practically destroyed Russia twice, exacting a terrible toll.

Stalin’s proposal was taken seriously by the respected political commentator James Warburg, but otherwise mostly ignored or ridiculed at the time. Recent scholarship has begun to take a different view. The bitterly anti-Communist Soviet scholar Adam Ulam has taken the status of Stalin’s proposal to be an “unresolved mystery.” Washington “wasted little effort in flatly rejecting Moscow’s initiative,” he has written, on grounds that “were embarrassingly unconvincing.” The political, scholarly, and general intellectual failure left open “the basic question,” Ulam added: “Was Stalin genuinely ready to sacrifice the newly created German Democratic Republic (GDR) on the altar of real democracy,” with consequences for world peace and for American security that could have been enormous?

Reviewing recent research in Soviet archives, one of the most respected Cold War scholars, Melvyn Leffler, has observed that many scholars were surprised to discover “[Lavrenti] Beria — the sinister, brutal head of the [Russian] secret police — propos[ed] that the Kremlin offer the West a deal on the unification and neutralization of Germany,” agreeing “to sacrifice the East German communist regime to reduce East-West tensions” and improve internal political and economic conditions in Russia — opportunities that were squandered in favor of securing German participation in NATO.

Under the circumstances, it is not impossible that agreements might then have been reached that would have protected the security of the American population from the gravest threat on the horizon. But that possibility apparently was not considered, a striking indication of how slight a role authentic security plays in state policy.

The Cuban Missile Crisis and Beyond

That conclusion was underscored repeatedly in the years that followed. When Nikita Khrushchev took control in Russia in 1953 after Stalin’s death, he recognized that the USSR could not compete militarily with the U.S., the richest and most powerful country in history, with incomparable advantages. If it ever hoped to escape its economic backwardness and the devastating effect of the last world war, it would need to reverse the arms race.

Accordingly, Khrushchev proposed sharp mutual reductions in offensive weapons. The incoming Kennedy administration considered the offer and rejected it, instead turning to rapid military expansion, even though it was already far in the lead. The late Kenneth Waltz, supported by other strategic analysts with close connections to U.S. intelligence, wrote then that the Kennedy administration “undertook the largest strategic and conventional peace-time military build-up the world has yet seen… even as Khrushchev was trying at once to carry through a major reduction in the conventional forces and to follow a strategy of minimum deterrence, and we did so even though the balance of strategic weapons greatly favored the United States.” Again, harming national security while enhancing state power.

U.S. intelligence verified that huge cuts had indeed been made in active Soviet military forces, both in terms of aircraft and manpower. In 1963, Khrushchev again called for new reductions. As a gesture, he withdrew troops from East Germany and called on Washington to reciprocate. That call, too, was rejected. William Kaufmann, a former top Pentagon aide and leading analyst of security issues, described the U.S. failure to respond to Khrushchev’s initiatives as, in career terms, “the one regret I have.”

The Soviet reaction to the U.S. build-up of those years was to place nuclear missiles in Cuba in October 1962 to try to redress the balance at least slightly. The move was also motivated in part by Kennedy’s terrorist campaign against Fidel Castro’s Cuba, which was scheduled to lead to invasion that very month, as Russia and Cuba may have known. The ensuing “missile crisis” was “the most dangerous moment in history,” in the words of historian Arthur Schlesinger, Kennedy’s adviser and confidant.

As the crisis peaked in late October, Kennedy received a secret letter from Khrushchev offering to end it by simultaneous public withdrawal of Russian missiles from Cuba and U.S. Jupiter missiles from Turkey. The latter were obsolete missiles, already ordered withdrawn by the Kennedy administration because they were being replaced by far more lethal Polaris submarines to be stationed in the Mediterranean.

Kennedy’s subjective estimate at that moment was that if he refused the Soviet premier’s offer, there was a 33% to 50% probability of nuclear war — a war that, as President Eisenhower had warned, would have destroyed the northern hemisphere. Kennedy nonetheless refused Khrushchev’s proposal for public withdrawal of the missiles from Cuba and Turkey; only the withdrawal from Cuba could be public, so as to protect the U.S. right to place missiles on Russia’s borders or anywhere else it chose.

It is hard to think of a more horrendous decision in history — and for this, he is still highly praised for his cool courage and statesmanship.

Ten years later, in the last days of the 1973 Israel-Arab war, Henry Kissinger, then national security adviser to President Nixon, called a nuclear alert. The purpose was to warn the Russians not to interfere with his delicate diplomatic maneuvers designed to ensure an Israeli victory, but of a limited sort so that the U.S. would still be in control of the region unilaterally. And the maneuvers were indeed delicate. The U.S. and Russia had jointly imposed a cease-fire, but Kissinger secretly informed the Israelis that they could ignore it. Hence the need for the nuclear alert to frighten the Russians away. The security of Americans had its usual status.

Ten years later, the Reagan administration launched operations to probe Russian air defenses by simulating air and naval attacks and a high-level nuclear alert that the Russians were intended to detect. These actions were undertaken at a very tense moment. Washington was deploying Pershing II strategic missiles in Europe with a five-minute flight time to Moscow. President Reagan had also announced the Strategic Defense Initiative (“Star Wars”) program, which the Russians understood to be effectively a first-strike weapon, a standard interpretation of missile defense on all sides. And other tensions were rising.

Naturally, these actions caused great alarm in Russia, which, unlike the U.S., was quite vulnerable and had repeatedly been invaded and virtually destroyed. That led to a major war scare in 1983. Newly released archives reveal that the danger was even more severe than historians had previously assumed. A CIA study entitled “The War Scare Was for Real” concluded that U.S. intelligence may have underestimated Russian concerns and the threat of a Russian preventative nuclear strike. The exercises “almost became a prelude to a preventative nuclear strike,” according to an account in the Journal of Strategic Studies.

It was even more dangerous than that, as we learned last September, when the BBC reported that right in the midst of these world-threatening developments, Russia’s early-warning systems detected an incoming missile strike from the United States, sending its nuclear system onto the highest-level alert. The protocol for the Soviet military was to retaliate with a nuclear attack of its own. Fortunately, the officer on duty, Stanislav Petrov, decided to disobey orders and not report the warnings to his superiors. He received an official reprimand. And thanks to his dereliction of duty, we’re still alive to talk about it.

The security of the population was no more a high priority for Reagan administration planners than for their predecessors. And so it continues to the present, even putting aside the numerous near-catastrophic nuclear accidents that occurred over the years, many reviewed in Eric Schlosser’s chilling study Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety. In other words, it is hard to contest General Butler’s conclusions.

Survival in the Post-Cold War Era

The record of post-Cold War actions and doctrines is hardly reassuring either. Every self-respecting president has to have a doctrine. The Clinton Doctrine was encapsulated in the slogan “multilateral when we can, unilateral when we must.” In congressional testimony, the phrase “when we must” was explained more fully: the U.S. is entitled to resort to “unilateral use of military power” to ensure “uninhibited access to key markets, energy supplies, and strategic resources.” Meanwhile, STRATCOM in the Clinton era produced an important study entitled “Essentials of Post-Cold War Deterrence,” issued well after the Soviet Union had collapsed and Clinton was extending President George H.W. Bush’s program of expanding NATO to the east in violation of promises to Soviet Premier Mikhail Gorbachev — with reverberations to the present.

That STRATCOM study was concerned with “the role of nuclear weapons in the post-Cold War era.” A central conclusion: that the U.S. must maintain the right to launch a first strike, even against non-nuclear states. Furthermore, nuclear weapons must always be at the ready because they “cast a shadow over any crisis or conflict.” They were, that is, constantly being used, just as you’re using a gun if you aim but don’t fire one while robbing a store (a point that Daniel Ellsberg has repeatedly stressed). STRATCOM went on to advise that “planners should not be too rational about determining… what the opponent values the most.” Everything should simply be targeted. “[I]t hurts to portray ourselves as too fully rational and cool-headed … That the U.S. may become irrational and vindictive if its vital interests are attacked should be a part of the national persona we project.” It is “beneficial [for our strategic posture] if some elements may appear to be potentially ‘out of control,’” thus posing a constant threat of nuclear attack — a severe violation of the U.N. Charter, if anyone cares.

Not much here about the noble goals constantly proclaimed — or for that matter the obligation under the Non-Proliferation Treaty to make “good faith” efforts to eliminate this scourge of the earth. What resounds, rather, is an adaptation of Hilaire Belloc’s famous couplet about the Maxim gun (to quote the great African historian Chinweizu):

“Whatever happens, we have got,

The Atom Bomb, and they have not.”

After Clinton came, of course, George W. Bush, whose broad endorsement of preventative war easily encompassed Japan’s attack in December 1941 on military bases in two U.S. overseas possessions, at a time when Japanese militarists were well aware that B-17 Flying Fortresses were being rushed off assembly lines and deployed to those bases with the intent “to burn out the industrial heart of the Empire with fire-bomb attacks on the teeming bamboo ant heaps of Honshu and Kyushu.” That was how the prewar plans were described by their architect, Air Force General Claire Chennault, with the enthusiastic approval of President Franklin Roosevelt, Secretary of State Cordell Hull, and Army Chief of Staff General George Marshall.

Then comes Barack Obama, with pleasant words about working to abolish nuclear weapons — combined with plans to spend $1 trillion on the U.S. nuclear arsenal in the next 30 years, a percentage of the military budget “comparable to spending for procurement of new strategic systems in the 1980s under President Ronald Reagan,” according to a study by the James Martin Center for Nonproliferation Studies at the Monterey Institute of International Studies.

Obama has also not hesitated to play with fire for political gain. Take for example the capture and assassination of Osama bin Laden by Navy SEALs. Obama brought it up with pride in an important speech on national security in May 2013. It was widely covered, but one crucial paragraph was ignored.

Obama hailed the operation but added that it could not be the norm. The reason, he said, was that the risks “were immense.” The SEALs might have been “embroiled in an extended firefight.” Even though, by luck, that didn’t happen, “the cost to our relationship with Pakistan and the backlash among the Pakistani public over encroachment on their territory was … severe.”

Let us now add a few details. The SEALs were ordered to fight their way out if apprehended. They would not have been left to their fate if “embroiled in an extended firefight.” The full force of the U.S. military would have been used to extricate them. Pakistan has a powerful, well-trained military, highly protective of state sovereignty. It also has nuclear weapons, and Pakistani specialists are concerned about the possible penetration of their nuclear security system by jihadi elements. It is also no secret that the population has been embittered and radicalized by Washington’s drone terror campaign and other policies.

While the SEALs were still in the bin Laden compound, Pakistani Chief of Staff Ashfaq Parvez Kayani was informed of the raid and ordered the military “to confront any unidentified aircraft,” which he assumed would be from India. Meanwhile in Kabul, U.S. war commander General David Petraeus ordered “warplanes to respond” if the Pakistanis “scrambled their fighter jets.” As Obama said, by luck the worst didn’t happen, though it could have been quite ugly. But the risks were faced without noticeable concern. Or subsequent comment.

As General Butler observed, it is a near miracle that we have escaped destruction so far, and the longer we tempt fate, the less likely it is that we can hope for divine intervention to perpetuate the miracle.

Noam Chomsky On Outrage

Almost every day brings news of awful crimes, but some are so heinous, so horrendous and malicious, that they dwarf all else. One of those rare events took place on July 17, when Malaysia Airlines Flight MH17 was shot down in eastern Ukraine, killing 298 people.

The Guardian of Virtue in the White House denounced it as an “outrage of unspeakable proportions,” which happened “because of Russian support.” His UN Ambassador thundered that “when 298 civilians are killed” in the “horrific downing” of a civilian plane, “we must stop at nothing to determine who is responsible and to bring them to justice.” She also called on Putin to end his shameful efforts to evade his very clear responsibility.

True, the “irritating little man” with the “ratlike face” (Timothy Garton Ash) had called for an independent investigation, but that could only have been because of sanctions from the one country courageous enough to impose them, the United States, while Europeans cower in fear.

On CNN, former U.S. Ambassador to Ukraine William Taylor assured the world that the irritating little man “is clearly responsible … for the shoot down of this airliner.” For weeks, lead stories reported the anguish of the families, details of the lives of the murdered victims, the international efforts to claim the bodies, the fury over the horrific crime that “stunned the world,” as the press reports daily in grisly detail.

Every literate person, and certainly every editor and commentator, instantly recalled another case when a plane was shot down with comparable loss of life: Iran Air 655 with 290 killed, including 66 children, shot down in Iranian airspace on a clearly identified commercial air route. The crime was not carried out “with U.S. support,” nor has its agent ever been in doubt. It was the guided-missile cruiser USS Vincennes, operating in Iranian waters in the Persian Gulf.

The commander of a nearby U.S. vessel, David Carlson, wrote in the U.S. Naval Proceedings that he “wondered aloud in disbelief” as “the Vincennes announced her intentions” to attack what was clearly a civilian aircraft. He speculated that “Robo Cruiser,” as the Vincennes was called because of its aggressive behavior, “felt a need to prove the viability of Aegis (the sophisticated anti-aircraft system on the cruiser) in the Persian Gulf, and that they hankered for the opportunity to show their stuff.”

Two years later, the commander of the Vincennes and the officer in charge of anti-air warfare were given the Legion of Merit award for “exceptionally meritorious conduct in the performance of outstanding service” and for the “calm and professional atmosphere” during the period of the destruction of the Iranian Airbus. The incident was not mentioned in the award.

President Reagan blamed the Iranians and defended the actions of the warship, which “followed standing orders and widely publicized procedures, firing to protect itself against possible attack.” His successor, Bush I, proclaimed that “I will never apologize for the United States — I don’t care what the facts are … I’m not an apologize-for-America kind of guy.”

No evasions of responsibility here, unlike the barbarians in the East.

There was little reaction at the time: no outrage, no desperate search for victims, no passionate denunciations of those responsible, no eloquent laments by the US Ambassador to the UN about the “immense and heart-wrenching loss” when the airliner was downed. Iranian condemnations were occasionally noted, and dismissed as “boilerplate attacks on the United States.”

Small wonder, then, that this insignificant earlier event merited only a few scattered and dismissive words in the U.S. media during the vast furor over a real crime, in which the demonic enemy might (or might not) have been indirectly involved.

One exception was in the London Daily Mail, where Dominic Lawson wrote that although “Putin’s apologists” might bring up the Iran Air attack, the comparison actually demonstrates our high moral values as contrasted with the miserable Russians, who try to evade their responsibility for MH17 with lies while Washington at once announced that the US warship had shot down the Iranian aircraft — righteously.

We know why Ukrainians and Russians are in their own countries, but one might ask what exactly the Vincennes was doing in Iranian waters. The answer is simple. It was defending Washington’s great friend Saddam Hussein in his murderous aggression against Iran. For the victims, the shoot-down was no small matter. It was a major factor in Iran’s recognition that it could not fight on any longer, according to historian Dilip Hiro.

It is worth remembering the extent of Washington’s devotion to its friend Saddam. Reagan removed him from the terrorist list so that aid could be sent to expedite his assault on Iran, and later denied his murderous crimes against the Kurds, blocking congressional condemnations. He also accorded Saddam a privilege otherwise granted only to Israel: there was no notable reaction when Iraq attacked the USS Stark with missiles, killing 37 crewmen, much like the case of the USS Liberty, attacked repeatedly by Israeli jets and torpedo boats in 1967, killing 34 crewmen.

Reagan’s successor, Bush I, went on to provide further aid to Saddam, badly needed after the war with Iran that he launched. Bush also invited Iraqi nuclear engineers to come to the US for advanced training in weapons production. In April 1990, Bush dispatched a high-level Senate delegation, led by future Republican presidential candidate Bob Dole, to convey his warm regards to his friend Saddam and to assure him that he should disregard irresponsible criticism from the “haughty and pampered press,” and that such miscreants had been removed from Voice of America. The fawning before Saddam continued until he turned into a new Hitler a few months later by disobeying orders, or perhaps misunderstanding them, and invading Kuwait, with illuminating consequences that are worth reviewing once again though I will leave the matter here.

Other precedents had long since been dismissed to the memory hole as also without significance. One example is the Libyan civilian airliner that lost its way in a sandstorm in 1973 and was shot down by US-supplied Israeli jets, two minutes’ flight time from Cairo, towards which it was heading. The death toll was only 110 that time. Israel blamed the French pilot, with the endorsement of the New York Times, which added that the Israeli act was “at worst … an act of callousness that not even the savagery of previous Arab actions can excuse.” The incident was passed over quickly in the United States, with little criticism. When Israeli Prime Minister Golda Meir arrived in the US four days later, she faced few embarrassing questions and returned home with new gifts of military aircraft.

The reaction was much the same when Washington’s favored Angolan terrorist organization UNITA claimed to have shot down two civilian airliners at the same time, among other cases.

Returning to the sole authentic and horrific crime, the New York Times reported that American UN ambassador Samantha Power “choked up as she spoke of infants who perished in the Malaysia Airlines crash in Ukraine [and] The Dutch foreign minister, Frans Timmermans, could barely contain his anger as he recalled seeing pictures of ‘thugs’ snatching wedding bands off the fingers of the victims.” At the same session, the report continues, there was also “a long recitation of names and ages — all belonging to children killed in the latest Israeli offensive in Gaza.” The only reported reaction was by Palestinian envoy Riyad Mansour, who “grew quiet in the middle of” the recitation.

The Israeli attack on Gaza in July did, however, elicit outrage in Washington. President Obama “reiterated his ‘strong condemnation’ of rocket and tunnel attacks against Israel by the militant group Hamas,” The Hill reported. He “also expressed ‘growing concern’ about the rising number of Palestinian civilian deaths in Gaza,” but without condemnation. The Senate filled that gap, voting unanimously to support Israeli actions in Gaza while condemning “the unprovoked rocket fire at Israel” by Hamas and calling on “Palestinian Authority President Mahmoud Abbas to dissolve the unity governing arrangement with Hamas and condemn the attacks on Israel.”

As for Congress, perhaps it’s enough to join the 80% of the public who disapprove of their performance, though the word “disapprove” is rather too mild in this case. But in Obama’s defense, it may be that he has no idea what Israel is doing in Gaza with the weapons that he was kind enough to supply to them. After all, he relies on US intelligence, which may be too busy collecting phone calls and email messages of citizens to pay much attention to such marginalia. It may be useful, then, to review what we all should know.

Israel’s goal is simple: quiet-for-quiet, a return to the norm. What then is the norm? For the West Bank, the norm is that Israel continues with its illegal construction of settlements and infrastructure so that it can integrate into Israel whatever might be of value to it, meanwhile consigning Palestinians to unviable cantons and subjecting them to intense repression and violence.

For the past 14 years, the norm is that Israel kills more than two Palestinian children a week. The latest Israeli rampage was set off by the brutal murder of three Israeli boys from a settler community in the occupied West Bank. A month before, two Palestinian boys were shot dead in the West Bank city of Ramallah. That elicited no attention, which is understandable, since it is routine. “The institutionalised disregard for Palestinian life in the West helps explain not only why Palestinians resort to violence,” the respected Middle East analyst Mouin Rabbani reports, “but also Israel’s latest assault on the Gaza Strip.”

Quiet-for-quiet also enables Israel to carry forward its program of separating Gaza from the West Bank. That program has been pursued vigorously, always with US support, ever since the US and Israel accepted the Oslo accords, which declare the two regions to be an inseparable territorial unity. A look at the map explains the rationale. Gaza provides Palestine’s only access to the outside world, so once the two are separated, any autonomy that Israel might grant to Palestinians in the West Bank would leave them effectively imprisoned between hostile states, Israel and Jordan. The imprisonment will become even more severe as Israel continues its program of expelling Palestinians from the Jordan Valley and constructing Israeli settlements there.

The norm in Gaza was described in detail by the heroic Norwegian trauma surgeon Mads Gilbert, who has worked in Gaza’s main hospital through Israel’s most grotesque crimes and returned again for the current onslaught. In June 2014 he submitted a report on the Gaza health sector to UNRWA, the UN Agency that tries desperately, on a shoestring, to care for refugees.

“At least 57 % of Gaza households are food insecure and about 80 % are now aid recipients,” Gilbert reports. “Food insecurity and rising poverty also mean that most residents cannot meet their daily caloric requirements, while over 90 % of the water in Gaza has been deemed unfit for human consumption,” a situation that is becoming even worse as Israel again attacks water and sewage systems, leaving 1.2 million people with even more severe disruption of the barest necessity of life.

Gilbert reports that “Palestinian children in Gaza are suffering immensely. A large proportion are affected by the man-made malnourishment regime caused by the Israeli-imposed blockade. Prevalence of anaemia in children …”

The distinguished human rights lawyer Raji Sourani, who has remained in Gaza through years of Israeli brutality and terror, writes that “The most common sentence I heard when people began to talk about ceasefire: everybody says it’s better for all of us to die and not go back to the situation we used to have before this war. We don’t want that again. We have no dignity, no pride; we are just soft targets, and we are very cheap. Either this situation really improves or it is better to just die. I am talking about intellectuals, academics, ordinary people: everybody is saying that.”

Similar sentiments have been widely heard: it is better to die with dignity than to be slowly strangled by the torturer.

For Gaza, the plans for the norm were explained forthrightly by Dov Weissglass, the confidant of Ariel Sharon who negotiated the withdrawal of Israeli settlers from Gaza in 2005. Hailed as a grand gesture in Israel and among acolytes and the deluded elsewhere, the withdrawal was in reality a carefully staged “national trauma,” properly ridiculed by informed Israeli commentators, among them Israel’s leading sociologist, the late Baruch Kimmerling.

What actually happened is that Israeli hawks, led by Sharon, realized that it made good sense to transfer the illegal settlers from their subsidized communities in devastated Gaza to subsidized settlements in the other occupied territories, which Israel intends to keep. But instead of simply transferring them, as would have been simple enough, it was considered more effective to present the world with images of little children pleading with soldiers not to destroy their homes, amidst cries of “Never Again,” with the implication obvious. What made the farce even more transparent was that it was a replica of the staged trauma when Israel had to evacuate the Egyptian Sinai in 1982. But it played very well for the intended audience abroad.

In Weissglass’s own description of the transfer of settlers from Gaza to other occupied territories, “What I effectively agreed to with the Americans was that [the major settlement blocs in the West Bank] would not be dealt with at all, and the rest will not be dealt with until the Palestinians turn into Finns” – but a special kind of Finns, who would accept rule by a foreign power. “The significance is the freezing of the political process,” Weissglass continued. “And when you freeze that process you prevent the establishment of a Palestinian state and you prevent a discussion about the refugees, the borders and Jerusalem. Effectively, this whole package that is called the Palestinian state, with all that it entails, has been removed from our agenda indefinitely. And all this with [President Bush’s] authority and permission and the ratification of both houses of Congress.”

Weissglass added that Gazans would remain “on a diet, but not to make them die of hunger” – which would not help Israel’s fading reputation. With their vaunted technical efficiency, Israeli experts determined exactly how many calories a day Gazans needed for bare survival, while also depriving them of medicines, construction materials, or other means of decent life. Israeli military forces confined them by land, sea and air to what British Prime Minister David Cameron accurately described as a prison camp. The Israeli withdrawal left Israel in total control of Gaza, hence the occupying power under international law.

The official story is that after Israel graciously handed Gaza over to the Palestinians, in the hope that they would construct a flourishing state, they revealed their true nature by subjecting Israel to unremitting rocket attack and forcing the captive population to become martyrs so that Israel would be pictured in a bad light. Reality is rather different.

A few weeks after Israeli troops withdrew, leaving the occupation intact, Palestinians committed a major crime. In January 2006, they voted the wrong way in a carefully monitored free election, handing control of the Parliament to Hamas. The media constantly intone that Hamas is dedicated to the destruction of Israel. In reality, its leaders have repeatedly made it clear and explicit that Hamas would accept a two-state settlement in accord with the international consensus that has been blocked by the US and Israel for 40 years. In contrast, Israel is dedicated to the destruction of Palestine, apart from some occasional meaningless words, and is implementing that commitment.

True, Israel accepted the Road Map for reaching a two-state settlement initiated by President Bush and adopted by the Quartet that is to supervise it: the US, the European Union, the United Nations, and Russia. But as Prime Minister Sharon accepted the Road Map, he at once added fourteen reservations that effectively nullify it. The facts were known to activists, but revealed to the general public for the first time in Jimmy Carter’s book “Palestine: Peace Not Apartheid.” They remain under wraps in media reporting and commentary.

The (unrevised) 1999 platform of Israel’s governing party, Binyamin Netanyahu’s Likud, “flatly rejects the establishment of a Palestinian Arab state west of the Jordan river.” And for those who like to obsess about meaningless charters, the core component of Likud, Menahem Begin’s Herut, has yet to abandon its founding doctrine that the territory on both sides of the Jordan is part of the Land of Israel.

The crime of the Palestinians in January 2006 was punished at once. The US and Israel, with Europe shamefully trailing behind, imposed harsh sanctions on the errant population and Israel stepped up its violence. By June, when the attacks sharply escalated, Israel had already fired more than 7700 [155 mm] shells at northern Gaza.

The US and Israel quickly initiated plans for a military coup to overthrow the elected government. When Hamas had the effrontery to foil the plans, the Israeli assaults and the siege became far more severe, justified by the claim that Hamas had taken over the Gaza Strip by force – which is not entirely false, though something rather crucial is omitted.

There should be no need to review again the horrendous record since. The relentless siege and savage attacks are punctuated by episodes of “mowing the lawn,” to borrow Israel’s cheery expression for its periodic exercises of shooting fish in a pond in what it calls a “war of defense.” Once the lawn is mowed and the desperate population seeks to reconstruct somehow from the devastation and the murders, there is a cease-fire agreement. These have been regularly observed by Hamas, as Israel concedes, until Israel violates them with renewed violence.

The most recent cease-fire was established after Israel’s November 2012 assault. Though Israel maintained its devastating siege, Hamas observed the cease-fire, as Israel again concedes. Matters changed in June, when Fatah and Hamas forged a unity agreement, which established a new government of technocrats that had no Hamas participation and accepted all of the demands of the Quartet. Israel was naturally furious, even more so when even Obama joined the West in signaling approval. The unity agreement not only undercuts Israel’s claim that it cannot negotiate with a divided Palestine, but also threatens the long-term goal of dividing Gaza from the West Bank and pursuing its destructive policies in both of the regions.

Something had to be done, and an occasion arose shortly after, when the three Israeli boys were murdered in the West Bank. The Netanyahu government knew at once that they were dead, but pretended otherwise, which provided the opportunity to launch a rampage in the West Bank, targeting Hamas. Netanyahu claimed to have certain knowledge that Hamas was responsible. That too was a lie, as recognized early on. There has been no pretense of presenting evidence. One of Israel’s leading authorities on Hamas, Shlomi Eldar, reported almost at once that the killers very likely came from a dissident clan in Hebron that has long been a thorn in the side of Hamas. Eldar added that “I’m sure they didn’t get any green light from the leadership of Hamas, they just thought it was the right time to act.” The Israeli police have since been searching for two members of the clan, still claiming, without evidence, that they are “Hamas terrorists.”

The 18-day rampage however did succeed in undermining the feared unity government, and sharply increasing Israeli repression. According to Israeli military sources, Israeli soldiers arrested 419 Palestinians, including 335 affiliated with Hamas, and killed six Palestinians, also searching thousands of locations and confiscating $350,000. Israel also conducted dozens of attacks in Gaza, killing 5 Hamas members on July 7.

Hamas finally reacted with its first rockets in 19 months, providing Israel with the pretext for Operation Protective Edge on July 8.

There has been ample reporting of the exploits of the self-declared Most Moral Army in the World, which should receive the Nobel Peace Prize according to Israel’s Ambassador to the US. By July 26, over 1000 Palestinians had been killed, 70% of them civilians including hundreds of women and children. And 3 Israeli civilians. By then, large areas of Gaza had been turned into rubble. During brief bombing pauses, relatives desperately seek shattered bodies or household items in the ruins of homes. Four hospitals had been attacked, each yet another war crime. The main power plant was attacked, sharply curtailing the already very limited electricity and worse still, reducing still further the minimal availability of fresh water. Another war crime. Meanwhile rescue teams and ambulances are repeatedly attacked. The atrocities mount throughout Gaza, while Israel claims that its goal is to destroy tunnels at the border.

Israeli officials laud the humanity of the army, which informs residents that their homes will be bombed. The practice is “sadism, sanctimoniously disguising itself as mercy,” in the words of Israeli journalist Amira Hass: “A recorded message demanding hundreds of thousands of people leave their already targeted homes, for another place, equally dangerous, 10 kilometers away.” In fact, there is no place in the prison safe from Israeli sadism, which may even exceed the terrible crimes of Operation Cast Lead in 2008-9.

The hideous revelations elicited the usual reaction from the Most Moral President in the World: great sympathy for Israelis, bitter condemnation of Hamas, and calls for moderation by both sides.

When the current episode of sadism is called off, Israel hopes to be free to pursue its criminal policies in the occupied territories without interference, and with the US support it has enjoyed in the past: military, economic, and diplomatic; and also ideological, by framing the issues in conformity to Israeli doctrines. Gazans will be free to return to the norm in their Israeli-run prison, while in the West Bank they can watch in peace as Israel dismantles what remains of their possessions.

That is the likely outcome if the US maintains its decisive and virtually unilateral support for Israeli crimes and its rejection of the longstanding international consensus on diplomatic settlement. But the future will be quite different if the US withdraws that support. In that case it would be possible to move towards the “enduring solution” in Gaza that Secretary of State Kerry called for, eliciting hysterical condemnation in Israel because the phrase could be interpreted as calling for an end to Israel’s siege and regular attacks. And – horror of horrors – the phrase might even be interpreted as calling for implementation of international law in the rest of the occupied territories.

It is not that Israel’s security would be threatened by adherence to international law; it would very likely be enhanced. But as explained 40 years ago by Israeli general Ezer Weizman, later President, Israel could then not “exist according to the scale, spirit, and quality she now embodies.”

There are similar cases in recent history. Indonesian generals swore that they would never abandon what Australian Foreign Minister Gareth Evans called “the Indonesian Province of East Timor” as he was making a deal to steal Timorese oil. And as long as they retained US support through decades of virtually genocidal slaughter, their goals were realistic. In September 1999, under considerable domestic and international pressure, President Clinton finally informed them quietly that the game was over and they instantly withdrew – while Evans turned to his new career as the lauded apostle of “Responsibility to Protect,” to be sure, in a version designed to permit western resort to violence at will.

Another relevant case is South Africa. In 1958, South Africa’s foreign minister informed the US ambassador that although his country was becoming a pariah state, it would not matter as long as the US support continued. His assessment proved fairly accurate. Thirty years later, Reagan was the last significant holdout in supporting the apartheid regime. Within a few years, Washington joined the world, and the regime collapsed – not for that reason alone of course; one crucial factor was the remarkable Cuban role in the liberation of Africa, generally ignored in the West though not in Africa.

Forty years ago Israel made the fateful decision to choose expansion over security, rejecting a full peace treaty offered by Egypt in return for evacuation from the occupied Egyptian Sinai, where Israel was initiating extensive settlement and development projects. It has adhered to that policy ever since, making essentially the same judgment as South Africa did in 1958.

In the case of Israel, if the US decided to join the world, the impact would be far greater. Relations of power allow nothing else, as has been demonstrated over and over when Washington has demanded that Israel abandon cherished goals. Furthermore, Israel by now has little recourse, after having adopted policies that turned it from a country that was greatly admired to one that is feared and despised, policies it is pursuing with blind determination today in its resolute march towards moral deterioration and possible ultimate destruction.

Could US policy change? It’s not impossible. Public opinion has shifted considerably in recent years, particularly among the young, and it cannot be completely ignored. For some years there has been a good basis for public demands that Washington observe its own laws and cut off military aid to Israel. US law requires that “no security assistance may be provided to any country the government of which engages in a consistent pattern of gross violations of internationally recognized human rights.” Israel most certainly is guilty of this consistent pattern, and has been for many years. That is why Amnesty International, in the course of Israel’s murderous Cast Lead operation in Gaza, called for an arms embargo against Israel (and Hamas). Senator Patrick Leahy, author of this provision of the law, has brought up its potential applicability to Israel in specific cases, and with a well-conducted educational, organizational, and activist effort such initiatives could be pursued successfully. That could have a very significant impact in itself, while also providing a springboard for further actions to compel Washington to become part of “the international community” and to observe international law and norms.

Nothing could be more significant for the tragic Palestinian victims of many years of violence and repression.

  • By: Noam Chomsky
  • chomsky.info, August 14, 2014

How Could Language Have Evolved?

Abstract

The evolution of the faculty of language largely remains an enigma. In this essay, we ask why. Language’s evolutionary analysis is complicated because it has no equivalent in any nonhuman species. There is also no consensus regarding the essential nature of the language “phenotype.” According to the “Strong Minimalist Thesis,” the key distinguishing feature of language (and what evolutionary theory must explain) is hierarchical syntactic structure. The faculty of language is likely to have emerged quite recently in evolutionary terms, some 70,000–100,000 years ago, and does not seem to have undergone modification since then, though individual languages do of course change over time, operating within this basic framework. The recent emergence of language and its stability are both consistent with the Strong Minimalist Thesis, which has at its core a single repeatable operation that takes exactly two syntactic elements a and b and assembles them to form the set {a, b}.

Citation: Bolhuis JJ, Tattersall I, Chomsky N, Berwick RC (2014) How Could Language Have Evolved? PLoS Biol 12(8): e1001934.

DOI: 10.1371/journal.pbio.1001934

Published: August 26, 2014

Copyright: © 2014 Bolhuis et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: JJB is funded by Utrecht University and by Netherlands Organization for Scientific Research (NWO) grants (ALW Open Competition and NWO Gravity and Horizon Programmes) (http://www.nwo.nl/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

It is uncontroversial that language has evolved, just like any other trait of living organisms. That is, once—not so long ago in evolutionary terms—there was no language at all, and now there is, at least in Homo sapiens. There is considerably less agreement as to how language evolved. There are a number of reasons for this lack of agreement. First, “language” is not always clearly defined, and this lack of clarity regarding the language phenotype leads to a corresponding lack of clarity regarding its evolutionary origins. Second, there is often confusion as to the nature of the evolutionary process and what it can tell us about the mechanisms of language. Here we argue that the basic principle that underlies language’s hierarchical syntactic structure is consistent with a relatively recent evolutionary emergence.

Conceptualizations of Language

The language faculty is often equated with “communication”—a trait that is shared by all animal species and possibly also by plants. In our view, for the purposes of scientific understanding, language should be understood as a particular computational cognitive system, implemented neurally, that cannot be equated with an excessively expansive notion of “language as communication” [1]. Externalized language may be used for communication, but that particular function is largely irrelevant in this context. Thus, the origin of the language faculty does not generally seem to be informed by considerations of the evolution of communication. This viewpoint does not preclude the possibility that communicative considerations can play a role in accounting for the maintenance of language once it has appeared or for the historical language change that has clearly occurred within the human species, with all individuals sharing a common language faculty, as some mathematical models indicate [1]–[3]. A similar misconception is that language is coextensive with speech and that the evolution of vocalization or auditory-vocal learning can therefore inform us about the evolution of language (Box 1) [1],[4]. However, speech and speech perception, while functioning as possible external interfaces for the language system, are not identical to it. An alternative externalization of language is in the visual domain, as sign language [1]; even haptic externalization by touch seems possible in deaf and blind individuals [5]. Thus, while the evolution of auditory-vocal learning may be relevant for the evolution of speech, it is not for the language faculty per se. We maintain that language is a computational cognitive mechanism that has hierarchical syntactic structure at its core [1], as outlined in the next section.

Box 1. Comparative Linguistics: Not Much to Compare

A major stumbling block for the comparative analysis of language evolution is that, so far, there is no evidence for human-like language syntax in any nonhuman species [4],[41],[42]. There is no a priori reason why a version of such a combinatorial computational system could not have evolved in nonhuman animals, either through common descent (e.g., apes) or convergent evolution (e.g., songbirds) [1],[18]. Although the auditory-vocal domain is just one possible external interface for language (with signing being another), it could be argued that the strongest animal candidates for human-like syntax are songbirds and parrots [1],[41],[42]. Not only do they have a similar brain organization underlying auditory-vocal behavior [4],[43],[44], they also exhibit vocal imitation learning that proceeds in a very similar way to speech acquisition in human infants [4],[41],[42]. This ability is absent in our closest relatives, the great apes [1],[4]. In addition, like human spoken language, birdsong involves patterned vocalizations that can be quite complex, with a set of rules that govern variable song element sequences known as “phonological syntax” [1],[4],[41],[42],[45]. Contrary to recent suggestions [46],[47], to date there is no evidence to suggest that birdsong patterns exhibit the hierarchical syntactic structure that characterizes human language [41],[48],[49] or any mapping to a level forming a language of thought as in humans. Avian vocal-learning species such as parrots are able to synchronize their behavior to variable rhythmic patterns [50]. Such rhythmic abilities may be involved in human prosodic processing, which is known to be an important factor in language acquisition [51].

The Faculty of Language According to the “Strong Minimalist Thesis”

In the last few years, certain linguistic theories have arrived at a much more narrowly defined and precise phenotype characterizing human language syntax. In place of a complex rule system or accounts grounded on general notions of “culture” or “communication,” it appears that human language syntax can be defined in an extremely simple way that makes conventional evolutionary explanations much simpler. In this view, human language syntax can be characterized via a single operation that takes exactly two (syntactic) elements a and b and puts them together to form the set {a, b}. We call this basic operation “merge” [1]. The “Strong Minimalist Thesis” (SMT) [6] holds that merge along with a general cognitive requirement for computationally minimal or efficient search suffices to account for much of human language syntax. The SMT also requires two mappings: one to an internal conceptual interface for thought and a second to a sensory-motor interface that externalizes language as speech, sign, or other modality [1]. The basic operation itself is simple. Given merge, two items such as the and apples are assembled as the set {the, apples}. Crucially, merge can apply to the results of its own output so that a further application of merge to ate and {the, apples} yields the set {ate, {the, apples}}, in this way deriving the full range of characteristic hierarchical structure that distinguishes human language from all other known nonhuman cognitive systems.
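As an illustrative sketch (not part of the paper itself), merge can be modeled as a binary set-forming operation, with nested frozensets standing in for syntactic objects; the variable names are our own:

```python
def merge(x, y):
    """Merge: take exactly two syntactic objects and return the set {x, y}."""
    return frozenset({x, y})

# Merge 'the' and 'apples' into {the, apples}
the_apples = merge("the", "apples")

# Merge applies to its own output: {ate, {the, apples}}
vp = merge("ate", the_apples)
```

Because merge can reapply to its own output, arbitrarily deep hierarchical structure follows from this single repeatable operation, which is the point of the SMT characterization.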

As the text below and Figure 1 show, merge also accounts for the characteristic appearance of displacement in human language—the apparent “movement” of phrases from one position to another. Displacement is not found in artificially constructed languages like computer programming languages and raises difficulties for parsing as well as communication. On the SMT account, however, displacement arises naturally and is to be expected, rather than exceptional, as seems true in every human language that has been examined carefully. Furthermore, hierarchical language structure is demonstrably present in humans, as shown, for instance, by online brain imaging experiments [7], but absent in nonhuman species, e.g., chimpanzees taught sign language demonstrably lack this combinatorial ability [8]. Thus, before the appearance of merge, there was no faculty of language as such, because this requires merge along with the conceptual atoms of the lexicon. Absent this, there is no way to arrive at the essentially infinite number of syntactic language structures, e.g., “the brown cow,” “a black cat behind the mat” [9]–[11], etc. This view leaves room for the possibility that some conceptual atoms were present antecedent to merge itself, though at present this remains entirely speculative. Even if true, there seems to be no evidence for an antecedent combinatorial and hierarchical syntax. Furthermore, merge itself is uniform in the contemporary human population as well as in the historical record, in contrast to human group differences such as the adult ability to digest lactose or skin pigmentation [12]. There is no doubt that a normal child from England raised in northern Alaska would readily learn Eskimo-Aleut, or vice versa; there have been no confirmed group differences in the ability of children to learn their first language, despite one or two marginal, indirect, and as yet unsubstantiated correlative indications [13].
This uniformity and stability point to the absence of major evolutionary change since the emergence of the language faculty. Taken together, these facts provide good evidence that merge was indeed the key evolutionary innovation for the language faculty.

Figure 1. The binary operation of merge (X,Y) when Y is a subset of X leads to the ubiquitous phenomenon of “displacement” in human language, as in Guess what boys eat. Left: The circled structure Y, corresponding to what, the object of the verb eat, is a subset of the circled structure X, corresponding to boys eat what. Right: The free application of merge to X, Y in this case automatically leads to what occupying two syntactic positions, as required for proper semantic interpretation. The original what remains as the object of the verb so that it can serve as an argument to this predicate, and a copy of what, “displaced,” is now in the position of a quantificational operator so that the form can be interpreted as “for what x, boys eat x.” Typically, only the higher what is actually pronounced, as indicated by the line drawn through the lower what. doi:10.1371/journal.pbio.1001934.g001

It is sometimes suggested that external motor sequences are “hierarchical” in this sense and so provide an antecedent platform for language [14]. However, as has been argued [15], motor sequences resemble more the “sequence of letters in the alphabet than the sequences of words in a sentence” ([15], p. 221). (For expository purposes, we omit here several technical linguistic details about the labelling of these words; see [16].) Along with the conceptual atoms of the lexicon, the SMT holds that merge, plus the internal interface mappings to the conceptual system, yields what has been called the “language of thought” [17].

More narrowly, the SMT also suffices to automatically derive some of the most central properties of human language syntax. For example, one of the most distinctive properties of human language syntax is that of “displacement,” along with what is sometimes called “duality of semantic patterning.” For example, in the sentence “(Guess) what boys eat,” “what” takes on a dual role and is interpreted in two places: first, as a question “operator” at the front of the sentence, where it is pronounced; and second, as a variable that serves as the argument of the verb eat, the thing eaten, where it is not pronounced (Figure 1). (There are marginal exceptions to the nonpronunciation of the second “what” that, when analyzed carefully, support the picture outlined here.) Given the free application of merge, we expect human languages to exhibit this phenomenon of displacement without any further stipulation. This is simply because operating freely, without any further constraints, merge derives this possibility. In our example “(Guess) what boys eat,” we assume that successive applications of merge as in our earlier example will first derive {boys, {eat, what}}—analogous to {boys, {eat, apples}}. Now we note that one can simply apply merge to the two syntactic objects {boys,{eat, what}} and {what}, in which {what} is a subcomponent (a subset) of the first syntactic object rather than some external set. This yields something like {what, {boys, {eat, what}}}, in this way marking out the two required operator and variable positions for what.
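The derivation of “(Guess) what boys eat” traced above can be followed in the same illustrative frozenset model (our own sketch, not the paper’s; the `contains` helper is an added convenience for checking subcomponents):

```python
def merge(x, y):
    """Merge two syntactic objects into the set {x, y}."""
    return frozenset({x, y})

def contains(x, y):
    """True if y is x itself or a subcomponent (term) of x."""
    if x == y:
        return True
    return isinstance(x, frozenset) and any(contains(m, y) for m in x)

# External merge builds {boys, {eat, what}}, analogous to {boys, {eat, apples}}
clause = merge("boys", merge("eat", "what"))

# Internal merge: the second argument is already a subcomponent of the first,
# yielding {what, {boys, {eat, what}}} -- 'what' now marks out both the
# operator position and the variable position.
assert contains(clause, "what")
question = merge("what", clause)
```

The point of the sketch is that nothing beyond the free application of merge is needed: internal merge (and hence displacement) is just merge applied to an object and one of its own subcomponents.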

The Nature of Evolution

Evolutionary analysis might be brought to bear on language in two different ways. First, evolutionary considerations could be used to explain the mechanisms of human language. For instance, principles derived from studying the evolution of communication might be used to predict, or even explain, the structural organization of language. This approach is fraught with difficulties. Questions of evolution or function are fundamentally different from those relating to mechanism, so evolution can never “explain” mechanisms [18]. For a start, the evolution of a particular trait may have proceeded in different ways, such as via common descent, convergence, or exaptation, and it is not easy to establish which of these possibilities (or combination of them) is relevant [18],[19]. More importantly, evolution by natural selection is not a causal factor of either cognitive or neural mechanisms [18]. Natural selection can be seen as one causal factor for the historical process of evolutionary change, but that is merely stating the essence of the theory of evolution. As we have argued, communication cannot be equated with language, so its evolution cannot inform the mechanisms of language syntax. However, evolutionary considerations—in particular, reconstructing the evolutionary history of relevant traits—might provide clues or hypotheses as to mechanisms, even though such hypotheses have frequently been shown to be false or misleading [18]. One such evolutionary clue is that, contrary to received wisdom, recent analyses suggest that significant genetic change may occur in human populations over the course of a few hundred years [19]. Such rapid change could also have occurred in the case of language, as we will argue below. In addition, as detailed in the next section, paleoanthropological evidence suggests that the appearance of symbolic thought, our most accurate proxy for language, was a recent evolutionary event. 
For instance, the first evidence of putatively symbolic artifacts dates back to only around 100,000 years ago, significantly after the appearance on the planet of anatomically distinctive Homo sapiens around 200,000 years ago [20],[21].

The second, more traditional way of applying evolutionary analysis to language is to attempt to reconstruct its evolutionary history. Here, too, we are confronted with major explanatory obstacles. For starters, language appears to be unique to the species H. sapiens. That eliminates one of the cornerstones of evolutionary analysis, the comparative method, which generally relies on features that are shared by virtue of common descent (Box 1) [1],[4],[18]. Alternatively, analysis can appeal to convergent evolution, in which similar features, such as birds’ wings and bats’ wings, arise independently to “solve” functionally analogous problems. Both situations help constrain and guide evolutionary explanation. Lacking both, as in the case of language, makes the explanatory search more difficult. In addition, evolutionary analysis of language is often plagued by popular, naïve, or antiquated conceptions of how evolution proceeds [19],[22]. That is, evolution is often seen as necessarily a slow, incremental process that unfolds gradually over the eons. Such a view of evolutionary change is not consistent with current evidence and our current understanding, in which evolutionary change can be swift, operating within just a few generations, whether it be in relation to finches’ beaks on the Galapagos, insect resistance to pesticides following WWII, or human development of lactose tolerance within dairy culture societies, to name a few cases out of many [19],[22]–[24].

Paleoanthropology

Language leaves no direct imprint in the fossil record, and the signals imparted by putative morphological proxies are highly mixed. Most of these involve speech production and detection, neither of which by itself is sufficient for inferring language (see Box 2). After all, while the anatomical potential to produce the frequencies used in modern speech may be necessary for the expression of language, it provides no proof that language itself was actually employed. What is more, it is not even necessary for language, as the visual and haptic externalization routes make clear. Moreover, even granting that speech is a requirement for language, it has been argued convincingly [25],[26] that equal proportions of the horizontal and vertical portions of the vocal tract are necessary for producing speech. This conformation is uniquely seen in our own species Homo sapiens. In a similar vein, the aural ability of nonhuman primates like chimpanzees or extinct hominid species such as H. neanderthalensis to perceive the sound frequencies associated with speech [26],[27] says nothing about the ability of these relatives to understand or produce language. Finally, neither the absolute size of the brain nor its external morphology as seen in endocasts has been shown to be relevant to the possession of language in an extinct hominid (Figure 2) [28]. Recent research has determined that Neanderthals possessed the modern version of the FOXP2 gene [29], malfunctions in which produce speech deficits in modern people [4],[30]. However, FOXP2 cannot be regarded as “the” gene “for” language, since it is only one of many that have to be functioning properly to permit its normal expression.

Figure 2. A crude plot of average hominid brain sizes over time. Although after an initial flatlining this plot appears to show consistent enlargement of hominid brains over the last 2 million years, it is essential to note that these brain volumes are averaged across a number of independent lineages within the genus Homo and likely represent the preferential success of larger-brained species. From [20]. Image credit: Gisselle Garcia, artist (brain images). doi:10.1371/journal.pbio.1001934.g002

Box 2. The Infamous Hyoid Bone

A putative relationship between basicranial flexion, laryngeal descent, and the ability to produce sounds essential to speech was suggested [52] before any fossil hyoid bones, the sole hard-tissue components of the laryngeal apparatus, were known. It was speculated that fossil hyoids would indicate when speech, and by extension language, originated. A Neanderthal hyoid from Kebara in Israel eventually proved very similar to its H. sapiens homologue, prompting the declaration that speech capacity was fully developed in adult H. neanderthalensis [53]. This was soon contested on the grounds that the morphology of the hyoid is both subsidiary [25] and unrelated [26] to its still-controversial [36] position in the neck. A recent study [54] focuses on the biomechanics, internal architecture, and function of the Kebara fossil. The authors conclude that their results “add support for the proposition that the Kebara 2 Neanderthal engaged in speech” ([54], p. 6). However, they wisely add that the issue of Neanderthal language will be fully resolved only on the basis of fuller comparative material. While the peripheral ability to produce speech is undoubtedly a necessary condition for the expression of vocally externalized language, it is not a sufficient one, and hyoid morphology, like most other lines of evidence, is evidently no silver bullet for determining when human language originated.

In terms of historically calibrated records, this leaves us only with archaeology, the archive of ancient human behaviors—although we have once again to seek indirect proxies for language. To the extent that language is interdependent with symbolic thought [20], the best proxies in this domain are objects that are explicitly symbolic in nature. Opinions have varied greatly as to what constitutes a symbolic object, but if one excludes stone and other Paleolithic implements from this category on the fairly firm grounds that they are pragmatic and that the techniques for making them can be passed along strictly by imitation [31], we are left with objects from the African Middle Stone Age (MSA) such as pierced shell beads from various ~100,000-year-old sites (e.g., [32]) and the ~80,000-year-old geometrically engraved plaques from South Africa’s Blombos Cave [33] as the earliest undisputed symbolic objects. Such objects began to be made only substantially after the appearance, around 200,000 years ago, of anatomically recognizable H. sapiens, also in Africa [34]. To be sure, this inference from the symbolic record, like much else in paleontology, rests on evidence that is necessarily quite indirect. Nevertheless, the conclusion lines up with what is known from genomics.

Our species was born in a technologically archaic context [35], and significantly, the tempo of change only began picking up after the point at which symbolic objects appeared. Evidently, a new potential for symbolic thought was born with our anatomically distinctive species, but it was only expressed after a necessary cultural stimulus had exerted itself. This stimulus was most plausibly the appearance of language in members of a species that demonstrably already possessed the peripheral vocal apparatus required to externalize it [20],[22]. Then, within a remarkably short space of time, art was invented, cities were born, and people had reached the moon. By this reckoning, the language faculty is an extremely recent acquisition in our lineage, and it was acquired not in the context of slow, gradual modification of preexisting systems under natural selection but in a single, rapid, emergent event that built upon those prior systems but was not predicted by them. It may be relevant to note that the anatomical ability to express language through speech was acquired at a considerable cost, namely the not-insignificant risk of adults choking to death [25],[36], as simultaneous breathing and swallowing became impossible with the descent of the larynx. However, since this conformation was already in place before language had demonstrably been acquired (see Box 2), the ability to express language cannot by itself have been the countervailing advantage. Finally, there has been no detectable evolution of the language faculty since it emerged, with no known group differences. This is another signature of relatively recent and rapid origin. For reasons like these, the relatively sudden origin of language poses difficulties that may be called “Darwin’s problem.”

The Minimalist Account of Language—Progress towards Resolving “Darwin’s Problem”

The Strong Minimalist Thesis (SMT) [6], as discussed above, greatly eases the explanatory burden for evolutionary analysis, since virtually all of the antecedent “machinery” for language is presumed to have been present long before the human species appeared. For instance, it appears that the ability to perceive “distinctive features” such as the difference between the sound b, as in bat, as opposed to p, as in pat, might be present in the mammalian lineage generally [37],[38]. The same holds for audition. Both comprise part of the externalization system for language. Furthermore, the general constraint of efficient computation would also seem plausibly antecedent in the cognitive computation of ancestral species. The only thing lacking for language would be merge, some specific way to externalize the internal computations and, importantly, the “atomic conceptual elements” that we have identified with words. Without merge, there would be no way to assemble the arbitrarily large, hierarchically structured objects with their specific interpretations in the language of thought that distinguish human language from other animal cognitive systems—just as Darwin insisted: “A complex train of thought can be no more carried out without the use of words, whether spoken or silent, than a long calculation without the use of figures or algebra” ([39], p. 88). With merge, however, the basic properties of human language emerge. Evolutionary analysis can thus be focused on this quite narrowly defined phenotypic property, merge itself, as the chief bridge between the ancestral and modern states for language. Since this change is relatively minor, it accords with what we know about the apparent rapidity of language’s emergence.

Conclusions

The Strong Minimalist Thesis that we have sketched here is consistent with a recent and rapid evolutionary emergence of language. Although this thesis is far from being established and contains many open questions, it offers an account that is compatible with the known empirical evolutionary evidence. Such an account also aligns with what we currently know about the relatively few genomic differences between our species and other ancestral Homo species—e.g., only about 100 coding gene differences between Homo sapiens and H. neanderthalensis, the majority of them in nonlanguage areas such as the olfactory and immune systems [40]. Furthermore, as far as we can tell from direct historical evidence, the capacity that emerged, namely the ability of any child to learn any human language, has remained frozen for 10,000 years or more. To be sure, such observations must be interpreted with great care and can remain only suggestive as long as we lack the knowledge to even crudely connect genomic changes to the relevant phenotypes. Even given these caveats, it appears that there has simply not been enough time for large-scale evolutionary changes, as indicated by the SMT. Clearly, such a novel computational system could have led to a large competitive advantage among the early H. sapiens who possessed it, particularly when linked to possibly preexisting perceptual and motor mechanisms.

  • Johan J. Bolhuis, Ian Tattersall, Noam Chomsky, Robert C. Berwick
  • PLOS Biology, August 26, 2014

The Julian Assange Show: Noam Chomsky & Tariq Ali

A surprise Arab drive for freedom, the West’s structural crisis and new hope coming from Latin America. That’s the modern world in the eyes of Noam Chomsky and Tariq Ali, two prominent thinkers and this week’s guests on Julian Assange’s show on RT.

 

If you’ve missed the previous episodes, you can always watch them online at http://assange.RT.com


Noam Chomsky: The End of History?

The short, strange era of human civilization would appear to be drawing to a close

It is not pleasant to contemplate the thoughts that must be passing through the mind of the Owl of Minerva as the dusk falls and she undertakes the task of interpreting the era of human civilization, which may now be approaching its inglorious end.

The era opened almost 10,000 years ago in the Fertile Crescent, stretching from the lands of the Tigris and Euphrates, through Phoenicia on the eastern coast of the Mediterranean to the Nile Valley, and from there to Greece and beyond. What is happening in this region provides painful lessons on the depths to which the species can descend.

The land of the Tigris and Euphrates has been the scene of unspeakable horrors in recent years. The George W. Bush-Tony Blair aggression in 2003, which many Iraqis compared to the Mongol invasions of the 13th century, was yet another lethal blow. It destroyed much of what survived the Bill Clinton-driven U.N. sanctions on Iraq, condemned as “genocidal” by the distinguished diplomats Denis Halliday and Hans von Sponeck, who administered them before resigning in protest. Halliday and von Sponeck’s devastating reports received the usual treatment accorded to unwanted facts.

One dreadful consequence of the U.S.-U.K. invasion is depicted in a New York Times “visual guide to the crisis in Iraq and Syria”: the radical change of Baghdad from mixed neighborhoods in 2003 to today’s sectarian enclaves trapped in bitter hatred. The conflicts ignited by the invasion have spread beyond and are now tearing the entire region to shreds.

Much of the Tigris-Euphrates area is in the hands of ISIS and its self-proclaimed Islamic State, a grim caricature of the extremist form of radical Islam that has its home in Saudi Arabia. Patrick Cockburn, a Middle East correspondent for The Independent and one of the best-informed analysts of ISIS, describes it as “a very horrible, in many ways fascist organization, very sectarian, kills anybody who doesn’t believe in their particular rigorous brand of Islam.”

Cockburn also points out the contradiction in the Western reaction to the emergence of ISIS: efforts to stem its advance in Iraq along with others to undermine the group’s major opponent in Syria, the brutal Bashar Assad regime. Meanwhile a major barrier to the spread of the ISIS plague to Lebanon is Hezbollah, a hated enemy of the U.S. and its Israeli ally. And to complicate the situation further, the U.S. and Iran now share a justified concern about the rise of the Islamic State, as do others in this highly conflicted region.

Egypt has plunged into some of its darkest days under a military dictatorship that continues to receive U.S. support. Egypt’s fate was not written in the stars. For centuries, alternative paths have been quite feasible, and not infrequently, a heavy imperial hand has barred the way.

After the renewed horrors of the past few weeks it should be unnecessary to comment on what emanates from Jerusalem, in remote history considered a moral center.

Eighty years ago, Martin Heidegger extolled Nazi Germany as providing the best hope for rescuing the glorious civilization of the Greeks from the barbarians of the East and West. Today, German bankers are crushing Greece under an economic regime designed to maintain their wealth and power.

The likely end of the era of civilization is foreshadowed in a new draft report by the Intergovernmental Panel on Climate Change (IPCC), the generally conservative monitor of what is happening to the physical world.

The report concludes that increasing greenhouse gas emissions risk “severe, pervasive and irreversible impacts for people and ecosystems” over the coming decades. The world is nearing the temperature when loss of the vast ice sheet over Greenland will be unstoppable. Along with melting Antarctic ice, that could raise sea levels to inundate major cities as well as coastal plains.

The era of civilization coincides closely with the geological epoch of the Holocene, beginning over 11,000 years ago. The previous Pleistocene epoch lasted 2.5 million years. Scientists now suggest that a new epoch began about 250 years ago, the Anthropocene, the period when human activity has had a dramatic impact on the physical world. The rate of change of geological epochs is hard to ignore.

One index of human impact is the extinction of species, now estimated to be at about the same rate as it was 65 million years ago when an asteroid hit the Earth. That is the presumed cause of the ending of the age of the dinosaurs, which opened the way for small mammals to proliferate and, ultimately, for modern humans to emerge. Today, it is humans who are the asteroid, condemning much of life to extinction.

The IPCC report reaffirms that the “vast majority” of known fuel reserves must be left in the ground to avert intolerable risks to future generations. Meanwhile the major energy corporations make no secret of their goal of exploiting these reserves and discovering new ones.

A day before it summarized the IPCC conclusions, The New York Times reported that huge Midwestern grain stocks are rotting so that the products of the North Dakota oil boom can be shipped by rail to Asia and Europe.

One of the most feared consequences of anthropogenic global warming is the thawing of permafrost regions. A study in Science magazine warns that “even slightly warmer temperatures [less than anticipated in coming years] could start melting permafrost, which in turn threatens to trigger the release of huge amounts of greenhouse gases trapped in ice,” with possible “fatal consequences” for the global climate.

Arundhati Roy suggests that the “most appropriate metaphor for the insanity of our times” is the Siachen Glacier, where Indian and Pakistani soldiers have killed each other on the highest battlefield in the world. The glacier is now melting and revealing “thousands of empty artillery shells, empty fuel drums, ice axes, old boots, tents and every other kind of waste that thousands of warring human beings generate” in meaningless conflict. And as the glaciers melt, India and Pakistan face indescribable disaster.

Sad species. Poor Owl.