Gradually, the masses are realising something is wrong
Maurice Newman 27 September 2016 The Australian
When your news and views come from a tightly controlled, left-wing media echo chamber, it may come as a bit of a shock to learn that in the July election almost 600,000 voters gave their first preference to Pauline Hanson’s One Nation party.
You may also be surprised to know that still deluded conservatives remain disenchanted with the media’s favourite Liberal, Malcolm Turnbull, for his epic fail as Prime Minister, especially when compared with the increasingly respected leader he deposed.
Perhaps when media outlets saturate us with “appropriate” thoughts and “acceptable” speech, and nonconformists are banished from television, radio and print, it’s easy to miss what is happening on the uneducated side of the tracks.
After all, members of the better educated and morally superior political class use a compliant media to shelter us from the dangerous, racist, homophobic, Islamophobic, sexist, welfare-reforming, climate-change denying bigots who inhabit the outer suburbs and countryside — the people whom Hillary Clinton calls “the deplorables”.
They must be vilified without debate, lest too many of us waver on the virtues of bigger governments, central planning, more bloated bureaucracies, higher taxes, unaffordable welfare, a “carbon-free” economy, more regulations, open borders, gender-free and values-free schools and same-sex marriage; the sort of agenda that finds favour at the UN.
Yet history is solid with evidence that this agenda will never deliver the promised human dignity, prosperity and liberty. Only free and open societies with small governments can do that.
Gradually, the masses are realising something is wrong. Their wealth and income growth is stagnating and their living standards are threatened. They see their taxes wasted on expensive, ill-conceived social programs. They live with migrants who refuse to integrate. They resent having government in their lives on everything from home renovations to recreational fishing, from penalty rates to free speech.
Thomas Jefferson’s warning that “the natural progress of things is for liberty to yield and government to gain ground” is now a stark reality.
The terms “people’s representative” and “public servant” have become a parody. In today’s world we are the servants and, if it suits, we are brushed aside with callous indifference.
Like the Labor government’s disregard for the enormous emotional and financial hurt suffered when, overnight, it shut down live cattle exports on the strength of a television show.
Or like the NSW parliament passing laws banning greyhound racing in the state. There was no remorse for the ruined lives of thousands of innocent people, many of whom won’t recover. Talk of compensation is a travesty.
Or like the victims living near the Williamtown and Oakey air force bases, made ill by toxic contamination of groundwater. It is known around the world that chemical agents used in airport fire drills cause cancer, neurological disease and reproductive disorders, yet the Australian Department of Defence simply denies responsibility. The powerless are hopelessly trapped between health risks and valueless properties.
Similar disdain is shown for those living near coal-seam gas fields and wind turbines. The authorities know of the health and financial impacts but defend operators by bending rules and ignoring guidelines.
If governments believe the ends justify the means, people don’t matter.
When Ernst & Young research finds one in eight Australians can’t meet their electricity bills, rather than show compassion for the poor and the elderly, governments push ruthlessly ahead with inefficient and expensive renewable energy projects.
This newspaper’s former editor-in-chief Chris Mitchell reveals in his book, Making Headlines, how Kevin Rudd, when prime minister, brazenly attempted to use state power to investigate “the relationship between my paper and him”. Rudd’s successor, Julia Gillard, wanted to establish a media watchdog to effectively gag journalists.
None of this is fantasy and it explains why people are losing confidence in the democratic system. Australians feel increasingly marginalised and unrepresented. They are tired of spin and being lied to. They know that data is often withheld or manipulated.
As they struggle to make ends meet, they watch helplessly as the established political class shamelessly abuses its many privileges.
It appears its sole purpose in life is to rule, not to govern. This adds weight to the insightful contention by the Business Council of Australia’s Jennifer Westacott that Australia is in desperate need of a national purpose.
It’s no wonder, to paraphrase American author Don Fredrick, that a growing number of Australians no longer want a tune-up at the same old garage. They want a new engine installed by experts — and they are increasingly of the view that the current crop of state and federal mechanics lacks the skills and experience to do the job.
One Nation may not be the answer, but its garage does offer a new engine.
This is Australia’s version of the Trump phenomenon. Like Donald Trump, Hanson is a non-establishment political disrupter. However, unlike Trump, who may soon occupy the White House, Hanson won’t inhabit the Lodge.
This leaves Australia’s establishment and the central planners very much in control. It means we will remain firmly on our current bigger-government path, financed by higher taxes and creative accounting.
Nobel laureate economist FA Hayek observes in his book The Road to Serfdom: “The more planners improvise, the greater the disturbance to normal business. Everyone suffers. People feel rightly that ‘planners’ can’t get things done.”
But he argues that, ironically, in a crisis the risk is that rather than wind back the role of government, people automatically turn to someone strong who demands obedience and uses coercion to achieve objectives.
Australia is now on that road to tyranny and, with another global recession in prospect and nearly 50 per cent of voters already dependent on government, the incentive is to vote for more government, not less.
The left-wing media echo-chamber will be an enthusiastic cheerleader.
By Chris Mitchell The Australian 26 September 2016
How to walk a mile in another’s shoes? That is the question great reporters seek to answer when they interview their subjects.
In a time when there has never been more media but it is light years wide and only atoms deep, there is little reward for doing what great newspapers seek to do: provide their readers with genuine understanding of issues and people’s views and motives.
This is a shouty, shallow and callow media age in which young Lefty tyros are rewarded for sharp opinions and violently executed tweets. Their opponents in the right-wing blogosphere too easily drift into hate and conspiracy over genuine inquiry.
So on a range of issues the Left and Right yell at each other in what psychologists refer to as “different emotional languages”, like a husband who really cannot understand what his wife is saying about why their marriage is going awry.
I got that feeling very strongly last Tuesday morning when I heard Andrew Bolt being interviewed by Fran Kelly about Tuesday night’s very interesting program with Linda Burney on Aboriginal recognition. Kelly was perplexed Bolt seemed not to agree with all the received Radio National wisdoms she was trying to get him to concede.
And yet the thinkers behind recognition, people such as Noel Pearson, have always known Andrew — with his ability to articulate the honestly held and genuine concerns of his readers — was the biggest danger to any potential referendum, even if it was first proposed by Andrew’s confidant Tony Abbott.
Just as with same-sex marriage and Muslim immigration the megaphones of the Left show no understanding of, or even empathy for, the great middle ground of Australian public opinion, which is where these issues will be decided.
Those in the maximalist camp on Recognition give every indication of preferring a loss to a win on slightly less ambitious terms. Wiser heads in the movement know proponents who argue for a treaty now would be smarter to take it one step at a time.
Still, I had real admiration for Bolt, who showed tremendous courage to expose himself to a full tilt ABC ideological crusade with newly elected federal Labor MP Burney. The Twittersphere was a feral sewer about him that night and next day.
Having been into the ABC’s Ultimo fortress in inner Sydney several times lately I can say the pursed-lipped tut-tutting is almost overpowering when a critic of the corporation crosses the threshold. Good on Bolt for doing it I reckon.
It was also gutsy of diminutive Burney to front a couple of conservative, and physical, giants in Bolt and Liberal Party federal MP Cory Bernardi in the latter’s Adelaide electoral office.
It is unlikely Bolt or Burney will ever persuade each other but viewers may have sensed an increased recognition on the part of each of the participants of the other’s genuine passion.
An Essential Media Poll published in The Guardian on Wednesday highlighted this sort of hyper partisanship and the inability of many in journalism even to understand how their own country feels about issues.
Given what has happened in Europe since German Chancellor Angela Merkel opened the nation’s borders to Syrian refugees a year ago it should have been no surprise to The Guardian or the ABC that half the nation wanted a ban on Muslim immigration.
The poll showed 49 per cent supporting a ban and only 40 per cent opposing. John Barron, hosting The Drum on ABC TV, seemed shocked that even large numbers of Greens and Labor voters supported such a ban.
It does make you wonder whether some journalists ever talk to ordinary Australians. Five minutes in any pub in the country will render such polling unnecessary.
The ideological and media divide is just as wide for same-sex marriage. The sheer brutality of the Left’s reaction to any Christian spokesperson who either opposes change or supports the plebiscite promised by the Coalition, elected less than three months ago, is vile.
This is not just a challenge for journalism. It is also a problem for the body politic.
If journalists don’t understand how their audiences feel and the media and politics become ever more sharply partisan, how will reformers ever bring about social, economic and political change?
This Balkanisation of social attitudes and the subsequent prioritising of opinion over reporting that seeks to explore and understand is making Western countries increasingly difficult to govern. Even something seemingly uncontestable such as repair of the federal budget now elicits sharply partisan divides among journalists and politicians.
I support recognition but would never think a referendum should even be held if a proposition was so ambitious it was guaranteed to fail.
A libertarian on same-sex marriage, I would nevertheless defend to the death the freedom of Christians, let alone Muslims and Jews, to stick to their religious convictions.
I think a ban on Muslim immigration would be the most dangerous thing the country could do if it really is interested in preventing young men from self-radicalising online.
After all, teenagers feeling so alienated from mainstream society today that they seek solace in the websites of Islamic State would only feel more like outsiders were all Muslim immigration banned. But it should sure as hell be obvious to any thinking journalist why in the face of so many attacks on Western targets during the past two years many Australians would be attracted to such a proposition.
If we try to walk a mile in another’s shoes, we might begin to see why Aboriginal kids would think it unfair to suggest they should just be happy to forget about their heritage and history and again accept what is being offered them. But we might also understand why Bolt believes people today should not be atoning to people many generations and multiple ethnicities away from the brutalities of white settlement.
We might understand the complexities of race from the position of the other person, as Stan Grant has so eloquently tried to explain.
The Gayby Baby documentary’s screening at schools caused a stir.
WHAT parents have to realise is there is nothing new or unusual about the controversy surrounding the allegation that Cheltenham Girls’ High has banned gender-specific terms such as “girls” and “boys” in favour of gender-neutral language.
A second example of adopting a lesbian, gay, bisexual, transgender and intersex (LGBTI) agenda is Newtown High School of the Performing Arts allowing students to wear either girls’ or boys’ uniforms regardless of gender. Add the furore surrounding the lesbian-inspired Gayby Baby film being shown in schools and the Safe Schools Coalition program and it’s clear that there is a concerted campaign by LGBTI advocates to force their radical agenda on schools.
And those enforcing a cultural left agenda on students, like La Trobe University’s Roz Ward, responsible for the Safe Schools program, make no secret of the ideology underpinning their long march through the education system. In a speech at the 2015 Marxism Conference, Ward argues, “LGBTI oppression and heteronormativity are woven into the fabric of capitalism” and “it will only be through a revitalised class struggle and revolutionary change that we can hope for the liberation of LGBTI people”.
In the same speech, titled The Role Of The Left For LGBTI Rights, Ward goes on to argue “Marxism offers both the hope and the strategy needed to create a world where human sexuality, gender and how we relate to our bodies can blossom in extraordinary new and amazing ways that we can only try to imagine today”.
Welcome to the world of gender theory. A world, as argued by the Gender Fairy story, where primary-school children can choose the gender they want to be as “only you know whether you are a boy or a girl. No one can tell you”.
A world where students are asked to sing: “You don’t have to be a certain way just because you have a penis, you don’t have to be a certain way just because you have a vagina”.
And it’s been happening for years. As detailed in my 2004 book Why Our Schools Are Failing, cultural-left academics, the Australian Education Union and the Australian Association for the Teachers of English are long-term advocates of the LGBTI agenda.
The 1995 AATE journal is dedicated to promoting a cultural-left view of gender and sexuality.
One paper calls on English teachers to explore “alternative versions of masculinity”, while another warns against “the various ways in which gender categories are tied to an oppressive binary structure for organising the social and cultural practices of adolescent boys and girls.”
The AEU’s 2001 policy argues that either/or categories like male and female are not natural or normal and that “all curriculum must be written in non-heterosexist language”.
The AEU’s policy goes on to argue that any discussion about LGBTI issues must “be positive in its approach” and that “homosexuality and bisexuality need to be normalised”.
Ignored is that, according to one of the largest national surveys of Australians, about 98 per cent self-identify as heterosexual, and that babies, with the odd exception, are born with either male or female chromosomes.
Fast forward to the NSW Teachers Federation’s LGBTI policies and it’s clear little has changed. The Federation supports the Safe Schools program, and anyone arguing for the primacy of male/female relationships is guilty of “heterosexism”.
Anyone committed to the belief there are two genders is guilty of promoting “fear and hatred of lesbians and gay men” and the belief “other types of sexualities or gender identities are unhealthy, unnatural and a threat to society”.
Ignored is that, compared with many other countries, including Saudi Arabia, Iran, Russia, India (where gay sex is illegal) and African nations such as Nigeria, Uganda and Zimbabwe, Australia is a tolerant and open society. Football clubs have gay pride matches, many of our elite sportsmen and sportswomen have no problem ‘‘outing’’ themselves and the Gay and Lesbian Mardi Gras is widely accepted.
What LGBTI advocates have to accept is parents are their children’s primary teachers and caregivers and imposing a politically correct, radical LGBTI agenda on schools is more about indoctrination than education.
Dr Kevin Donnelly was co-chair of the National Curriculum Review and is a senior research fellow at the Australian Catholic University.
Envy demands that there is always a winner and a loser
July 09, 2014 Tim Challies challies.com
I have written about envy before and have referred to it as “the lost sin.”
Envy is a sin I am prone to, though I feel like it is one of those sins I have battled hard against and, as I’ve battled, experienced a lot of God’s grace.
It is not nearly as prevalent in my life as it once was.
Recently, though, I felt it threatening to rear its ugly head again and spent a bit of time reflecting on it.
Here are three brief observations about envy.
ENVY IS COMPETITIVE
I am a competitive person and I believe it is this competitive streak that allows envy to make its presence felt in my life. Envy is a sin that makes me feel resentment or anger or sadness because another person has something or another person is something that I want for myself. Envy makes me aware that another person has some advantage, some good thing, that I want for myself. And there’s more: Envy makes me want that other person not to have it. This means that there are at least three evil components to envy: the deep discontent that comes when I see that another person has what I want; the desire to have it for myself; and the desire for it to be taken from him.
Do you see it? Envy always competes. Envy demands that there is always a winner and a loser. And envy almost always suggests that I, the envious person, am the loser.
ENVY ALWAYS WINS
Envy always wins, and if envy wins, I lose. Here’s the thing about envy: If I get that thing I want, I lose, because it will only generate pride and idolatry within me. I will win that competition I have created, and become proud of myself. Envy promises that if I only get that thing I want, I will finally be satisfied, I will finally be content. But that is a lie. If I get that thing, I will only grow proud. I lose.
On the other hand, if I do not get what I want, if I lose that competition, I am prone to sink into depression or despair. Envy promises that if I do not get that thing I want, my life is not worth living because I am a failure. Again, I lose.
In both cases, I lose and envy wins. Envy always wins, unless I put that sin to death.
Envy divides people who ought to be allies. Envy drives people apart who ought to be able to work closely together. Envy is clever in that it will cause me to compare myself to people who are a lot like me, not people who are unlike me. I am unlikely to envy the sports superstar or the famous musician because the distance between them and me is too great. Instead, I am likely to envy the pastor who is right down the street from me but who has a bigger congregation or nicer building; I am likely to envy the writer whose books or blog are more popular than mine. Where I should be able to work with these people based on similar interests and similar desires, envy will instead drive me away from them. Envy will make them my competitors and my enemies rather than my allies and co-laborers.
What’s the cure for envy? I can’t say it better than Charles Spurgeon: “The cure for envy lies in living under a constant sense of the divine presence, worshiping God and communing with Him all the day long, however long the day may seem. True religion lifts the soul into a higher region, where the judgment becomes more clear and the desires are more elevated. The more of heaven there is in our lives, the less of earth we shall covet. The fear of God casts out envy of men.”
Arthur C. Brooks
BACK in 1993, the misanthropic art critic Robert Hughes published a grumpy, entertaining book called “Culture of Complaint,” in which he predicted that America was doomed to become increasingly an “infantilized culture” of victimhood. It was a rant against what he saw as a grievance industry appearing all across the political spectrum.
I enjoyed the book, but as a lifelong optimist about America, was unpersuaded by Mr. Hughes’s argument. I dismissed it as just another apocalyptic prediction about our culture.
Unfortunately, the intervening two decades have made Mr. Hughes look prophetic and me look naïve.
“Victimhood culture” has now been identified as a widening phenomenon by mainstream sociologists. And it is impossible to miss the obvious examples all around us. We can laugh off some of them, for example, the argument that the design of a Starbucks cup is evidence of a secularist war on Christmas. Others, however, are more ominous.
On campuses, activists interpret ordinary interactions as “microaggressions” and set up “safe spaces” to protect students from certain forms of speech. And presidential candidates on both the left and the right routinely motivate supporters by declaring that they are under attack by immigrants or wealthy people.
So who cares if we are becoming a culture of victimhood? We all should.
To begin with, victimhood makes it more and more difficult for us to resolve political and social conflicts. The culture feeds a mentality that crowds out a necessary give and take — the very concept of good-faith disagreement — turning every policy difference into a pitched battle between good (us) and evil (them).
Consider a 2014 study in the Proceedings of the National Academy of Sciences, which examined why opposing groups, including Democrats and Republicans, found compromise so difficult. The researchers concluded that there was a widespread political “motive attribution asymmetry,” in which both sides attributed their own group’s aggressive behavior to love, but the opposite side’s to hatred. Today, millions of Americans believe that their side is basically benevolent while the other side is evil and out to get them.
Second, victimhood culture makes for worse citizens — people who are less helpful, more entitled, and more selfish. In 2010, four social psychologists from Stanford University published an article titled “Victim Entitlement to Behave Selfishly” in the Journal of Personality and Social Psychology. The researchers randomly assigned 104 human subjects to two groups.
Members of one group were prompted to write a short essay about a time when they felt bored; the other to write about “a time when your life seemed unfair. Perhaps you felt wronged or slighted by someone.” After writing the essay, the participants were interviewed and asked if they wanted to help the scholars in a simple, easy task.
The results were stark. Those who wrote the essays about being wronged were 26 percent less likely to help the researchers, and were rated by the researchers as feeling 13 percent more entitled.
In a separate experiment, the researchers found that members of the unfairness group were 11 percent more likely to express selfish attitudes. In a comical and telling aside, the researchers noted that the victims were more likely than the non-victims to leave trash behind on the desks and to steal the experimenters’ pens.
Does this mean that we should reject all claims that people are victims? Of course not. Some people are indeed victims in America — of crime, discrimination or deprivation. They deserve our empathy and require justice.
The problem is that the line is fuzzy between fighting for victimized people and promoting a victimhood culture. Where does the former stop and the latter start? I offer two signposts for your consideration.
First, look at the role of free speech in the debate. Victims and their advocates always rely on free speech and open dialogue to articulate unpopular truths. They rely on free speech to assert their right to speak.
Victimhood culture, by contrast, generally seeks to restrict expression in order to protect the sensibilities of its advocates. Victimhood claims the right to say who is and is not allowed to speak.
What about speech that endangers others? Fair-minded people can discriminate between expression that puts people at risk and that which merely rubs some the wrong way. Speaking up for the powerless is often “offensive” to conventional ears.
Second, look at a movement’s leadership. The fight for victims is led by aspirational leaders who challenge us to cultivate higher values. They insist that everyone is capable of — and has a right to — earned success. They articulate visions of human dignity.
But the organizations and people who ascend in a victimhood culture are very different. Some set themselves up as saviors; others focus on a common enemy. In all cases, they treat people less as individuals and more as aggrieved masses.
Robert Hughes turned out to be pretty accurate in his vision, I’m afraid. It is still in our hands to prove him wrong, however, and cultivate a nation of strong individuals motivated by hope and opportunity, not one dominated by victimhood. But we have a long way to go. Until then, I suggest keeping a close eye on your pen.
Arthur C. Brooks is the president of the American Enterprise Institute and a contributing opinion writer.
MORE: Dr John Townsend – The Entitlement Cure
by Dr Augusto Zimmermann
News Weekly, August 1, 2015
King John’s grant of Magna Carta in 1215 is a wonderful example of the central role religion played in the development of the common law. The following article is an edited version of a paper presented by Dr Augusto Zimmermann at the Parliament of Tasmania on the occasion of its commemoration of the 800th anniversary of Magna Carta on June 16, 2015.
The Great Charter represents a revolutionary advancement in the law in that the provisions found in the charter, and its many subsequent revisions, were predominantly concerned with recognising and endowing political and juridical rights. More importantly, the effect of the charter was a concession from the king that he, too, could be bound by the law, thus establishing a clear formal recognition of the rule of law.
Until Magna Carta, customary law had defined the legal rights of English subjects. In the absence of statute law, a king who disregarded custom was vested with the authority to administer the law as he saw fit. Accordingly, King John ruled arbitrarily after inheriting the throne from King Richard in 1199, endeavouring to liberate himself from the restraints of the law and of powerful ministers so as to govern the realm at his sole pleasure.
Still, the monarch’s ability to rule arbitrarily was soon called into question, especially when a number of failed military conflicts abroad (namely, losses to the French), combined with constant increases in taxes to fuel such conflicts, provoked a great deal of discontent among his subjects (most notably, the nobles and barons).
The 12th century was marked by an outburst of literature, art and culture in England, accompanied by the development of Christian ideals of law and government. The influential Archbishop of Canterbury, Hubert Walter (1160–1205), espoused the view that the royal power was inseparable from the law.
Legal historian Theodore Plucknett wrote: “[His] prestige was so great that a word from him on the interpretation of the law could set aside the opinion of the king and his advisers. King John, in fact, felt with much truth that he was not his own master so long as his great minister was alive.”
A 19th-century etching of the Seal of King John affixed to the Magna Carta, from the National Portrait Gallery in London.
Growing discontent with King John heightened after a dispute with Pope Innocent III over the appointment to the See of Canterbury. In 1205 two candidates disputed the election of the See. Pope Innocent III rejected both contenders and appointed his own candidate, Stephen Langton. John regarded his bishops as no more than higher civil servants and desired the English Church to be subservient to the Crown. Langton, however, insisted on the separate spheres of Church and state, attacking the king’s conduct and declaring that his subjects were not bound to him if he had broken faith with the “King of kings”.
The Great Interdict followed, to which the King replied by confiscating Church property. This led Rome to subject King John to severe punishments, especially excommunication in 1209. The king eventually succumbed to the Pope’s demands and was forced to resign the Crowns of England and Ireland, receiving them again as the Pope’s feudatory. In 1213, under the threat of French invasion by Phillip Augustus, King John finally accepted Langton’s appointment and swore to subject his kingdom to the lordship of Innocent III.

These sources of discontent eventually led the English barons to march into London in 1215. They forced King John to sign the articles of demand encompassed in Magna Carta. By that time Langton had become the main figure in the struggle of the barons against King John.
Stephen Langton’s original intent
Historians generally agree that Stephen Langton was the principal drafter of the original document. When Pope Innocent III appointed him in 1206, he had made an unusual choice since Langton had spent over 30 years outside England in the schools of Paris. This fact alone, indeed, was a good reason for King John’s complaint that the chosen candidate had lived too long among his arch enemies in France. Before becoming pontiff, Pope Innocent III – who deeply admired the learned Langton – had been a student of his at Paris.
When Langton arrived in England in July 1213 and met King John on July 20 at Winchester, he immediately absolved the king from excommunication on the condition that the laws of his ancestors were fully restored, particularly the laws of Edward the Confessor (c.1003–66) that required the monarch to rule justly.
This included an utterance made in 1140, which, based on the laws of Edward the Confessor, stated: “The king ought to do everything in the realm and by judgement of the great men of the realm. For right and judge ought to rule in the realm, rather than perverse will. Law is always what does right; will and violence and force are indeed not right. The king, indeed, ought to fear and love God above everything and preserve his commands throughout his realm.”
Archbishop Langton shared the view of his predecessor, Hubert Walter, that “loyalty was devotion, not to a man, but to a system of law and order which he believed to be a reflection of the law and order of the universe”. From Romans 13 Langton concluded that royal power derived from God and that such power was always limited by the rule of law. He stated: “If someone abuses the power that is given to him by God and if I know that this bad use would constitute a mortal sin for me, I ought not to obey him, lest I resist the ordinance of God.”
Elsewhere Langton stated that “when a king errs, the people should resist him as far as they can; if they do not, they sin”.
Additionally, he commented that “if someone has been condemned without a judicial sentence, the people are allowed to free the victim”.
It was Langton, therefore, who drafted the Great Charter as a way of resolving the baronial grievances. His biblical studies at Paris anticipated Magna Carta’s direct challenges to the royal power, manifestly asserting the superiority of the written law over political arbitrariness. Chapter 18 of Deuteronomy seemed to him to convey the principle that the law of the land should be reduced to writing for the instruction of the civil ruler.
Since the idea of written law had played a fundamental role in the formation of the Hebrew nation, Langton concluded that a similar function should be applied to the grievances levied against King John. These grievances should be expressed in writing and the king compelled to affix his royal seal to the written law.
Magna Carta was therefore primarily the work of Archbishop Langton, who sincerely hoped through this written document to realise an Old Testament covenantal kingship in England. His concerns for freedom and due process were made explicit in several provisions of Magna Carta, especially Clause 39 (“No freeman shall be taken or imprisoned or disseised [dispossessed] or outlawed or exiled or in any way ruined … except by the lawful judgement of his peers or by the law of the land”), Clause 40 (“To no one will we sell, to no one will we deny or delay right or justice”), and Clause 52 (“If anyone has been disseised or deprived by us without lawful judgement of his peers of lands, castles, liberties, or his rights, we will restore them to him at once”).
Langton’s biblical studies at Paris deeply shaped those important provisions. Because of this, Magna Carta can be read not just as a historical, constitutional or legal document but also as a religious document. Langton had in his Parisian exile been among the most famous lecturers on teachings of the Old Testament. He strongly believed that the law written down in Deuteronomy prevented the monarch from going beyond the power explicitly authorised to him.
He had studied Saul’s acclamation as king over Israel by the prophet Samuel, who “declared to the people the law of the kingdom and wrote it in a book and deposited it in the presence of the Lord” (1 Samuel 10:25). As such, Langton expected that a written law should become an “English Deuteronomy” that would work in the form of a covenant between God, king and people, thus ensuring that common-law polities had at their heart a covenantal foundation in which the king would be constitutionally accountable to a higher authority.
Archbishop Langton was a learned theologian and his massive commentaries on the Bible contain thousands of pages of explanation about the meaning of scriptural words and phrases. He applied his knowledge of biblical hermeneutics to draw modern parallels between England and the Old Testament stories of good kings and bad kings who abused their powers by violating God’s laws.
The good kings of Scripture, Langton argued, had been wise to acquaint themselves with the legal rules of Deuteronomy, a book of laws that Moses wrote in the form of a treaty (or social contract) between the king and his subjects, calling the nation of Israel to faithfully uphold God’s laws. By contrast, the bad rulers were those who sought to evade both the advice of their priests and the obligation to rule according to the law. Thus Langton concluded, among other things, that “necessity”, or absolute need, was the primary reason for taxation, although he complained that contemporary “rulers taxed for trivial reasons, from mere vanity or pride”.
As Nicolas Vincent points out: “Those who attended Langton’s lectures would have heard him contrast the priesthood recruited by Moses with modern bishops ‘recruited from the Exchequer in London’. Those who read his commentary on the book of Chronicles would have found him railing ‘against princes who flee from lengthy sermons’, surely a reference to King John’s attempts to escape the sermonising of St Hugh of Lincoln.
“Kingship itself, Langton argued, had been decreed by God not as a reward but as a punishment to mankind. As the Old Testament book of Hosea (13:11) proclaims: ‘I have given you a king in my wrath.’ ”
Archbishop Langton wholeheartedly embraced the scriptural thesis that civil government is not God’s original plan for humankind but rather a result of original sin. The first reference to civil government in Scripture is located in Genesis, chapter 9, where God is reported to command capital punishment for anyone who takes innocent life since humans are created in the image of God.
Kingship a ‘necessary evil’
Yet the state is regarded as not being envisaged in God’s original plan for humankind. Rather, the state is deemed a “necessary evil” since it is conceived only after sin has entered into the world, when it becomes therefore necessary to establish a civil authority to curb the violence ushered in by the Fall (Genesis 6:11-13). At the beginning of God’s creation, however, the biblical account reports that man and woman lived in close fellowship with their Creator, under his direct law and sole authority.
According to Baldwin, this biblical worldview led Archbishop Langton to conclude: “There was no government in the Garden of Eden before the Fall, and there will be none at the end of the world. Just as God allowed divorce because of human frailty, so he has permitted the existence of rulers only to curb the original sin that resulted from the Fall. When Yahweh in the Old Testament narrative (1 Samuel 8 and 9) agreed to the children of Israel choosing Saul as their king, therefore, he allowed it only with severe reservations and misgivings. …
“Langton argued that the law not only stated the people’s obligations to the king, but also what the king could exact from the people; for that reason the law was written down to prevent the king from demanding more.
“Most specifically, the law was the book of Deuteronomy, truly the second written law of the children of Israel. Chapter 17 prescribed the duties of the king.”
Religious significance of Magna Carta
Magna Carta signaled a remarkable advancement in English law. King John, acting on the advice of two archbishops and nine bishops, sealed Magna Carta “from reverence for God and for salvation of our soul and of all our ancestors and heirs, for the honour of God and the exaltation of Holy Church and the reform of our realm”. Furthermore, the barons justified their actions as legally permissible under God and the Church. In so doing, they were led by Archbishop Langton and Robert Fitzwalter, the latter declaring himself the “Marshal of the army of God and Holy Church”.
From 1225, subsequent versions of the Charter “were reinforced by sentences of excommunication against infringers”. Although this seems a strange form of punishment to our modern minds, it was for the breaking of his oaths that King Stephen after 1135 was stigmatised as a tyrant and usurper. Oath-taking was taken seriously and, in an age without effective judicial sanctions, “the consequences of oath-breaking could prove disastrous for individuals as for nations”.
J.C. Holt commented on the efficacy of ecclesiastical penalties for breaches of the charter: “Reinforce the charters by the threat of excommunication; promulgate the penalty in the most solemn assemblies of king, bishops, and nobles, as in 1237 and 1253; reinforce the threat by papal confirmation, as in 1245 and 1256, have both charters and sentence published in Latin, French, and English as in 1253, or read twice a year in cathedral churches as in 1297; display the Charter of Liberties in church, renewing it annually at Easter, as Archbishop Pecham laid down in 1279; embrace the king himself within the sentence of excommunication, as Archbishop Boniface did by implication in 1234.
“To modern eyes it is all repetitive and futile. In reality it was a prolonged attempt to bring the enforcement of the charter within the range of canon law, to attach the ecclesiastical penalties for breach of faith to infringements of promises made ‘for reverence for God’, as the charter put it, promises repeatedly reinforced by the most solemn oaths to observe and execute the charter’s terms. This was perhaps the best the 13th century could do to introduce some countervailing force to royal authority.”
Magna Carta can be historically described as a medieval treaty between the English king and his barons, concerning such matters as the custody of London and, in the Letters Testimonial signed by the Archbishop and the bishops, a “charter of liberty of Holy Church and of the liberal and free customs” that the monarch had conceded. The primary intent of the original draft was to bring about an end to a state of civil war through the signing of a document that declared the liberties it conveyed.
Customs, then, were not predominant; keeping the peace and the liberties of the realm were. Indeed, throughout Magna Carta customs are subsidiary to liberties, since they are conveyed as liberties in relation to practices that were commonly described as “consuetudines”. Above all, the Great Charter was granted not only “for the honour of God and the exaltation of Holy Church”, and out of “reverence of God and for the salvation of the [king’s] soul and those of all [his] ancestors and heirs”, but also, and most significantly, for “the reform of our realm”.
Dr Augusto Zimmermann is Law Reform Commissioner, Law Reform Commission of Western Australia, Senior Lecturer in Legal Theory and Constitutional Law, Murdoch Law School, former Associate Dean (Research) and Director of Postgraduate Studies at Murdoch, President of the Western Australian Legal Theory Association, and Professor of Law (Adjunct) at Notre Dame University, Sydney.
The Hobbit protagonist Bilbo Baggins. JRR Tolkien’s Shire prospered despite having hardly any government.
Those who believe in a small state and self-regulated markets could claim JRR Tolkien and Elinor Ostrom as two of their own.
This year’s Christmas blockbuster looks likely to be the first part of the Hobbit trilogy, which is released this week. JRR Tolkien’s prequel to the Lord of the Rings follows the story of Bilbo Baggins, plucked from his comfortable life in the Shire to accompany treasure-seeking dwarves on an adventure.
Tolkien became a cult figure among hippies in the 1960s, for whom LOTR worked on a number of levels: peace-lovers versus warmongers; military-industrial complex versus local smallholders; the lust for power versus individual freedom. These days he would have celebrated the victory of the people of Totnes in their campaign to keep a branch of Costa out of their town.
Yet those who believe in a small state and self-regulated markets could also claim Tolkien as one of their own. The Shire had hardly any government: families, for the most part, managed their own affairs and the only real official was the mayor, who oversaw the postal service and the watch.
Hobbits enjoyed a pipe and a mug of ale: it is unlikely Tolkien would have been a fan of smoking bans and minimum unit prices for alcohol. Like Elinor Ostrom, he might even have been invited to deliver the Hayek lecture at the UK’s bastion of free-market thinking, the Institute of Economic Affairs (IEA).
Ostrom, who died this year, became the first woman to win the Nobel prize for economics in 2009. Her work on the governance of common-pool resources, such as forests and fisheries, was based on studying communities to see what worked rather than on highly complex models. She concluded there was no “one right” way to do things but, in the main, the best solutions were where communities developed their own approach to managing common resources.
This was a message that went down well with those on the left for whom what has become known as the “tragedy of the commons” in the developing world is the result of privatisation, which has allowed companies to deplete resources in pursuit of short-term profitability.
But Ostrom was no great fan of big government either. She considered the EU’s common fisheries policy an unmitigated disaster, viewing with horror the attempt to have one set of rules from the Mediterranean to the Baltic. This went down rather well with economic liberals, and helps explain why the IEA is publishing a monograph based on Ostrom’s Hayek lecture. This explores whether there is a way of managing the commons that avoids the perils of market failure and of government regulation. Interestingly, it contains commentaries both from free-marketeers such as Mark Pennington of King’s College London and from Christina Chang of the Catholic aid agency Cafod.
Ostrom would have been pleased by this rare meeting of minds across the political spectrum. She talked about the “panacea problem” – policymakers’ belief that there was a “best way” of doing things. “For many purposes, if the market was not the best way, people used to think that the government was the best way. We need to get away from thinking about very broad terms that do not give us the specific detail that is needed to really know what we are talking about,” she said.
Governance systems that worked in practice were not those that stemmed from a theory of what ought to work but had, on the contrary, evolved from local conditions. “There is a huge diversity out there, and the range of governance systems that work reflects that diversity. We have found that government-, private- and community-based mechanisms all work in some settings.”
While careful not to fall into the trap of saying community-based systems always work best, Ostrom says there are many examples of local solutions that have husbanded resources carefully and avoided ecological damage. Her work suggests community-based approaches work best when there are clear boundaries to the resource – the pasture contained in a Swiss Alpine valley, for example – and where the local people draw up rules they deem appropriate, police them, and have an accepted mechanism for settling disputes and punishing those who transgress.
Self-organisation tends to work if there is a high level of trust and if the communities are allowed to develop their own rules. They then tend to be concerned about ensuring that the resource still exists for future generations, thus avoiding over-exploitation. Local monitoring is crucial, Ostrom suggests. “The local people pay attention to what is happening in the forest if they have some rights to collect.”
It is easy to see why Ostrom appeals to those on the left who believe in localism and collective solutions to problems that are not administered by the state. What is less obvious is why a body like the IEA should be excited by her work. The answer, according to the thinktank’s editorial director, Philip Booth, is that “in no sense do Professor Ostrom’s ideas conflict with the idea of a free economy”.
“To the left, perhaps, the community management of a resource is the acceptable face of a free economy like a mutual bank or co-operative retail outlet, but it is no less free for that,” he adds. If there is a congruence of thinking between left and right exemplified by Ostrom, it is in the concern about “bigness” in all its forms. Her message is that policies that go with the grain of local communities tend to work, while those that rely on the restraint of multinational corporations or the wisdom of officials tend not to.
In the question-and-answer session at the end of her lecture, Ostrom was asked whether it was possible to adapt an approach that worked for the management of fisheries and forests to tackling climate change. While the commonly held view is that global warming can only be handled effectively by governments, Ostrom said this was a mistake and it was important to encourage action at the local level. She welcomed the fact that 1,000 mayors in US cities had signed an agreement to start working on ways to reduce greenhouse gas emissions, adding: “I am very nervous about just sitting around and waiting and making the argument that the rest of us can’t do anything at all.”
Let’s be clear. For the most part, the world is not run along the lines suggested by Ostrom. It is overfished, increasingly deforested, ravaged by those who care nothing about resource management and local communities, dominated by dogmatists who think they know best. But there’s something heartening about an economist who doesn’t claim to have all the answers and who suggests there is a different way of doing things.
The Future of the Commons; Elinor Ostrom et al; The Institute of Economic Affairs; iea.org.uk
Am I responsible for ensuring that certain values outlast and outlive me?
From the July/August 2012 issue:
Selfishness as Virtue Benjamin E. Schwartz
Going Solo: The Extraordinary Rise and Surprising Appeal of Living Alone
by Eric Klinenberg
Penguin Press, 2012, 288 pp., $27.95
The health of American society is a perennial favorite topic for pundits, intellectuals, professors and politicians, as well it should be. The Founders understood the fragility of a free society and would take comfort knowing that our chattering classes keep watch over it. “A republic, if you can keep it”, warned Benjamin Franklin. Yet the gaze of today’s watchmen too often strays toward the meretricious. As ever, some confuse cause and effect. In these sped-up times, many fixate on the urgent while ignoring the essential. A great many, especially in the social science field, seem mesmerized by metrics, reminding us of Nietzsche’s famous remark that “were it not for the constant counterfeiting of the world by means of numbers, men could not live.” Some see numbers nearly everywhere: GDP, GNP, Gini coefficients, median income, unemployment and demographic data, the fluctuations of the Dow and the S&P indices, and much more besides.
It is widely assumed that we know a lot more about our social circumstances thanks to all these numbers than we did before they were crunched. That is not entirely obvious. With increasing momentum ever since the establishment of the Bureau of Labor Statistics in 1913, for example, we have become progressively obsessed with economic data to the point of neurosis. Our 19th-century forebears did not lose sleep or get caught up in herd-like trading behavior upon learning that growth in the third quarter of 1873 was much lower than expected, because they never expected anything in particular in the first place.
More generally, one suspects that method and data have too often displaced the search for wisdom and the skills to apply it, hiding ideology and more subtle forms of bias along the way. Perhaps things were clearer when we spoke of political economy and political philosophy, before we hived off social sciences under the name of economics and political science. Perhaps the separation of moral sensibilities rooted in religion from intellectual endeavors is not so enlightened after all. A case in point may be developed through an appreciation of sociologist Eric Klinenberg’s new book Going Solo: The Extraordinary Rise and Surprising Appeal of Living Alone.
Going Solo bases itself on relatively new data showing that more than 50 percent of American adults are single, and 31 million—roughly one out of every seven adults—live alone. This is a significant increase from 1950, when only 22 percent of American adults were single. It corresponds with an increase in the average age of marriage by five years to 28 for men and 26 for women. Put another way, people who live alone make up 28 percent of all U.S. households, which makes them more numerous than any other domestic unit, including the nuclear family. These are the highest numbers ever recorded since recorders of such things started, well, recording them.
This is very significant data, and here admittedly is a case where numbers reveal a situation that would otherwise not be obvious even to attentive people upon mere observation. Klinenberg does a commendable job of presenting it in a comprehensive yet lucid fashion.
Klinenberg, a professor at New York University and editor of the journal Public Culture, would not be much of a sociologist if he just left it at that, however. And he doesn’t. As his subtitle suggests, he likes what the data tell us; his position could be summed up by the subtitle of a book he commends: How Singles Are Stereotyped, Stigmatized, and Ignored, and Still Live Happily Ever After. Klinenberg is rarely explicit about his convictions, which saves him the trouble of seriously assaying their implications, but he finally gets to the point directly in his conclusion, asserting that “living alone is an individual choice that’s as valid as the choice to get married or live with a domestic partner. . . . [I]t’s a collective achievement—which is why it’s common in developed nations but not in poor ones.” Klinenberg cites Sweden as a model to be emulated.
This is a novel position, to be sure, considering that no known civilization in human history has lauded solitary living as a social ideal. Either the extended family or, since the Industrial Revolution, the nuclear family variant of it, has been a universal social norm for at least the past 10,000 years and arguably much longer than that. And you don’t need data to see why: Society needs children and children need families.
That, however, is actually the least of it. What the Founders knew, but so many contemporaries seem to have forgotten, is that the well-being of any society turns not just on its capacity to procreate but on its ability to transmit a tradition of moral reasoning, and the values that attend it, to future generations. Drawing from the Hebrew prophets and the Greek philosophers, they recognized that values are in flux as virtuous or venal cycles reverberate across generations. Not that moral development is to be feared, or that change is in principle to be disparaged, but development and change have to be carefully nurtured by sentries on the lookout for indulgence, corrosion and selfishness.
The Founders understood that the good life can only be safeguarded by a good society, and that this indelible connection bestows obligations on individuals to invest in the acculturation of future generations.
To this ancient wisdom, the contemporary social and natural sciences have added powerful evidence over the past quarter century. We are social animals and context is critical in all we do as individuals and as members of groups. Yet Klinenberg somehow manages to ignore the intergenerational ramifications of “going solo.” His selection of supporting evidence is revealing: “Compared with their married counterparts”, he writes, “those who live alone are more likely to eat out and exercise, go to art and music classes, attend public events and lectures, and volunteer. There’s even evidence that people who live alone . . . have more environmentally sustainable lifestyles than families, since they favor urban apartments over large suburban homes.”
It’s telling that the activities Klinenberg mentions are put forward self-evidently as barometers of the good life. He and his research assistants interviewed approximately 300 people who live alone, with the majority of research taking place in four boroughs of New York City, “whose diversity”, according to the author, “allowed for a heterogeneous sample within the parameters of a great city.” One of them was a divorced man named Steve, whose revelation is related as follows:
The problem, he realized, was that living with someone—even a woman he loved—meant denying himself the chance to enjoy an unfettered existence: Dating new women. Staying out as long as he wanted and not worrying about anyone else. Watching sports. Seeing movies. Meeting friends . . . Steven had grown to appreciate the virtues of living lightly, without obligation.
Klinenberg’s use here of the word virtue is especially jarring. He is using the dictionary definition (“a good or useful quality of a thing”), but even a source as intellectually thin as Wikipedia understands that, “Virtue is moral excellence. A virtue is a positive trait or quality subjectively deemed to be morally excellent and thus is valued as a foundation of principle and good moral being.” Clearly, virtue requires moral reasoning, so how is it possible to conceive of “virtue” “without obligation”?
Klinenberg’s answer, citing the demographer Andrew Cherlin, is that “one’s primary obligation is [now] to oneself rather than to one’s partner and children.” The basis for this assertion is found in Klinenberg’s introduction, in which he invokes the German sociologists Ulrich Beck and Elisabeth Beck-Gernsheim, who claim that, “For the first time in history the individual is becoming the basic unit of social reproduction.” For Klinenberg, procreation and family have been separated such that living alone and being a father or mother are no longer in necessary conflict. This is a foundational proposition and the keystone to the conceptual edifice Klinenberg constructs.
It’s also utter nonsense. Individuals don’t transfer values from one generation to the next. Individuals are biologically incapable of producing a next generation except in the crudest possible sense of the term. Socialization—the process through which a person internalizes what is good and bad, meaningful and meaningless—is shaped by one’s relatives, the friends and associates who surround a person, and typically a canon of texts that is revered and consulted for guidance. The values of expressive individualism guarantee that the values of future generations will be more or less up for grabs for the simple reason that expressive individualists have a difficult time replicating (the demographic data don’t lie) and an even more difficult time socializing a child.
It’s true that expressive individualists do connect with one another for varying periods of time and do at least fairly often have children. But the deliberately atomistic quality of their value system makes it difficult for these children to understand, let alone continue, whatever moral traditions their parents may affirm and display. In this respect, today’s expressive individualists bear some comparison to a 19th-century millennial sect called the Harmony Society. Founded in Germany in 1785, the Harmonists were a Protestant community that flourished in Indiana between 1825 and 1850. At the time, its members were known for their social conscience and economic success. Yet these virtues weren’t enough to ensure the sect’s survival for one simple reason: It promoted celibacy. The Harmonists, like today’s expressive individualists, were ethical, hardworking, productive people, but their way of life proved unsustainable because their values failed to foster successor generations.
It is no coincidence, then, that children are almost entirely absent from Going Solo. Or that Klinenberg suggests that Americans emulate the Swedes, whose fertility rate stands at 1.67 children born per woman, significantly below replacement. He never addresses the topic of childrearing, as opposed to modern techniques of childbearing. Ironically, the closest he gets to the subject is in the introduction, where the reader learns that Henry David Thoreau’s mother frequently came to deliver home-cooked meals to her son as he was experiencing supposedly self-sufficient solitude at Walden Pond.
It’s striking when Klinenberg’s rare invocations of children do occur. To support his thesis that solo living is on the inevitable rise, Klinenberg notes the prevalence of children growing up in single bedrooms. He cites a report by the William Gladden Foundation that asserts, “More children today have less adult supervision than ever before in American history . . . and many begin their self-care at about age eight.” Klinenberg portrays this as a positive trend that is catalyzing self-sufficiency. One wonders how he would apply this analytical approach to a recent study showing that the share of children born to unmarried women has crossed a threshold: More than half of births to American women under thirty occur outside marriage, and many of these children will be raised by single mothers. Such an upbringing may foster independence, yes; but researchers have consistently found that children born outside marriage face elevated risks of falling into poverty, failing in school and suffering severe emotional and behavioral problems.1 Klinenberg doesn’t mention any of this research.
By far the most frequent place children appear in Going Solo is in testimonials by divorced parents who express gratitude for having them and sorrow for not seeing them more often. Interestingly, while Klinenberg tends to praise the decision to delay parenthood in the name of self-actualization, several of his interviews reflect on the challenge of significantly altering one’s personal habits later in life and abandoning the ways of a “singleton.” “You become a lot pickier when you’re older”, explains one interviewee. Another relates, “Women can be suspicious of never-married men in their late thirties and forties.” Patterns and preferences harden over time, but not to fear. “However enriching it may be”, advises Klinenberg, “becoming a single parent is also the most challenging way to attract domestic companionship.” He then immediately adds: “There is another, more popular alternative for people who want to live alone but need someone to care for or something to help stave off loneliness: getting a pet.”
In other words, the role of children Klinenberg appreciates most has nothing much to do with social reproduction or the preservation of a society’s moral compass, but rather with companionship. This conceptualization is reflected in the parenting style of baby boomers and members of Generation X who aim to be “friends” with their children. Inadvertently, one would presume, Klinenberg has created a functional equivalence between a child and a Chihuahua.
Such assumptions are no longer surprising. To the extent that America’s elite shares a common value, it is expressive individualism, the idea that one’s greatest priority ought to be self-expression, self-cultivation and self-fulfillment. Klinenberg is probably right to suggest that very few people these days wince at the manifestations of expressive individualism or are prepared to buck the tide in the lives of their own families. Who is willing to discourage a child from investing in a second or a third academic degree (even if this defers starting a family) or taking a job at a top law firm (even if the job leaves little time for family life), or traveling the world (even if the instability doesn’t allow for child rearing)? Who is anyone, even a parent, to discourage a child’s dreams of self-fulfillment?
Klinenberg notes that when he was a graduate student at Berkeley he knew several students in their late twenties whose adviser, a pioneering female scholar, actively discouraged them from entering relationships before they had made their mark. “Don’t you owe it to yourself?” she would ask. And yet the proximate results of this type of expressive individualism are not the ultimate consequences. The effects are felt across generations.
That is a problem, but not one that bothers Klinenberg. In the cacophony of appeals to save the planet, create jobs, reduce the national debt, and end world poverty, it’s rare to hear anyone champion the value of social reproduction. But the intergenerational transfer of cultural capital doesn’t just happen automatically. It requires time, money, space and lots of institutional support. It also requires prioritization and encouragement. While America’s columnists, talking heads and progressive intellectuals are consumed with economic growth, technological development, individual opportunity and social safety nets, few question how well America is developing the character of the next generation.
It’s no accident that so many Americans have embraced expressive individualism or that American commentators avoid discussing how well we are transferring values from one generation to the next. After all, America is the land of the free, and that freedom grew in part out of a protest against that which came before (the medieval Catholic Church, the British Crown, the ways of the “old world”). The very act of journeying from somewhere else to the New World or from established colonies to the American frontier was an act of departure even if that journey allowed for continuity in a different place. A country born of immigrants is cautious in how forcefully it speaks of the present generation’s debt to the past or its responsibility to the future. But the Founders also greatly valued organic community, understanding that the chief distinguishing feature of a free society is that it maintains order through the self-regulation of citizens living together rather than by dint of the authorities of state, the internalization of civic values being the central bulwark against the deformation of liberty into license and chaos.
Nonetheless, American individualism seems to have been fed a rich diet in recent decades. That diet has consisted of both the general infusion of market-fundamentalist metaphors into our social and intellectual life and a range of technological innovations. Both phenomena threaten to deplete the stock of social capital.2 Individualism has come to mean no limits on our freedom of maneuver, no obligations arising from a shared history, community and culture. As a matter of objective and, yes, quantitatively measurable reality, we are indeed “going solo”, and most Americans seem to be fine with that—as the generally positive reception of Klinenberg’s book seems to reflect.
The recognition that we are who we are because of our elders raises uncomfortable questions about our responsibility to future generations. If someone in my past forsook instant gratification to allow me to become who I am, does this obligate me to do the same? Am I responsible for ensuring that certain values outlast and outlive me? America’s strength is a function of many factors, but certainly one of them is that for generations citizens answered these questions affirmatively. The popularity of “going solo”, which Klinenberg’s data strongly affirms, doesn’t necessarily mean that Americans are answering “no” to these questions. It’s worse than that: As more of us spend more of our lives alone, we’re less likely to even confront them. By default, we are now allowed the novel conceit that selfishness is a virtue.
1See Elizabeth Wildsmith, Nicole R. Steward-Streng and Jennifer Manlove, “Childbearing Outside of Marriage: Estimates and Trends in the United States”, Child Trends Research Brief (November 2011).
2See Giles Slade, “Electric Company”, The American Interest (September/October 2010).
Home, sweet home
Something unusual is happening in Baniyala in East Arnhem Land, Northern Territory. Footings are being poured this week for two new houses. Unlike other houses on Australia’s Indigenous lands, these two-bedroom houses are being built privately for Baniyala families.
Although Australian governments are building public houses in Indigenous townships, no new public houses are being built in homelands / outstations. On average, it costs more than $600,000 to build a three-bedroom public house.
Rents in these Indigenous NT townships are based on the occupants’ ability to pay. For example, if there are 13 people (say 8 adults and 5 children) living in a three-bedroom house, the government assesses each adult’s capacity to pay based on welfare, pensions and other benefits the occupants receive. Some may pay $20 per fortnight while others pay $60. In this way, an average rent of $400–$450 per fortnight is collected from the (overcrowded) house.
No new houses have been built in Baniyala for almost 20 years, and the existing ‘dwellings’ don’t have kitchens or bathrooms. Darwin houses have to meet cyclone standards, but no standards at all apply for houses built on Indigenous lands in the Northern Territory.
Indigenous land is private property, and the Baniyala community is courageously grappling with the knowledge that it is their responsibility to organise and pay for new houses. But governments have not supported private housing on Indigenous lands. Unable to get title (99-year leases) on their traditional land, Baniyala families are denied benefits such as the $7,000 First Home Owners Grant and the $10,000 NT Build Bonus grant other Australians receive.
In March 2012, the Baniyala community lodged a petition in the Senate asking the Minister for Indigenous Affairs, Jenny Macklin, to help them get leases for their houses, arguing that ‘benefits enjoyed by non-Indigenous Australians should not be denied to us because we live on Indigenous land.’ They have not yet received a reply.
In contrast to expensive public housing, the privately built two-bedroom houses in Baniyala will cost only about $100,000 each. Indigenous families living in remote areas – even those on welfare – have enough income to pay rent or mortgage repayments on these houses. The houses will have a fixed rent or mortgage to recover costs, regardless of who lives there.
After generations of decrepit public housing, Baniyala families will finally have the option of building and living in new houses built to capital city standards.
Emeritus Professor Hughes is a Senior Fellow at The Centre for Independent Studies and Mark Hughes is an independent researcher. With Sara Hudson, they wrote the report Private Housing on Indigenous Lands in 2010.
Since 2005, CIS has been supporting the Baniyala community – initially to build a real school for their children and now with housing and jobs.
From CIS ideas@TheCentre
The Denial of Private Property Rights to Aborigines
Helen Hughes & Mark Hughes
By Llewellyn H. Rockwell, Jr.
24 December, 2011
At the heart of the Christmas story rests some important lessons concerning free enterprise, government, and the role of wealth in society.
Let’s begin with one of the most famous phrases: “There’s no room at the inn.” This phrase is often invoked as if it were a cruel and heartless dismissal of the tired travelers Joseph and Mary. Many renditions of the story conjure up images of the couple going from inn to inn only to have the owner barking at them to go away and slamming the door.
In fact, the inns were full to overflowing throughout the Holy Land because of the Roman emperor’s decree that everyone be counted and taxed. Inns are private businesses, and customers are their lifeblood. There would have been no reason to turn away this man of royal lineage and his beautiful, expectant bride.
In any case, the second chapter of St. Luke doesn’t say that they were continually rejected at place after place. It tells of the charity of a single inn owner, perhaps the first person they encountered, who, after all, was a businessman.
His inn was full, but he offered them what he had: the stable. There is no mention that the innkeeper charged the couple even one copper coin, though given his rights as a property owner, he certainly could have.
It’s remarkable, then, to think that when the Word was made flesh with the birth of Jesus, it was through the intercessory work of a private businessman.
Without his assistance, the story would have been very different indeed.
People complain about the “commercialization” of Christmas, but clearly commerce was there from the beginning, playing an essential and laudable role.
And yet we don’t even know the innkeeper’s name. In two thousand years of celebrating Christmas, no tribute has been paid to the owner of the inn.
Such is the fate of the merchant throughout all history: doing well, doing good, and forgotten for his service to humanity.
Clearly, if there was a room shortage, it was an unusual event and brought about through some sort of market distortion.
After all, if there had been frequent shortages of rooms in Bethlehem, entrepreneurs would have noticed that there were profits to be made by addressing this systematic problem, and built more inns.
It was because of a government decree that Mary and Joseph, and so many others like them, were traveling in the first place. They had to be uprooted for fear of the emperor’s census workers and tax collectors.
And consider the costs of slogging all the way “from Galilee, out of the city of Nazareth, into Judea, unto the city of David,” not to speak of the opportunity costs Joseph endured having to leave his own business.
Thus we have another lesson: government’s use of coercive dictates distorts the market.
Moving on in the story, we come to the Three Kings, also called Wise Men. Talk about a historical anomaly: the two rarely go together! Most kings behaved like the Roman emperor’s local enforcer, Herod. Not only did he order people to leave their homes and foot the bill for travel so that they could be taxed; he was also a liar. He told the Wise Men that he wanted to find Jesus so that he could “come and adore Him.” In fact, Herod wanted to kill Him. Hence, another lesson: you can’t trust a political hack to tell the truth.
Once having found the Holy Family, what gifts did the Wise Men bring? Not soup and sandwiches, but “gold, frankincense, and myrrh.” These were the most rare items obtainable in that world in those times, and they must have commanded a very high market price.
Far from rejecting them as extravagant, the Holy Family accepted them as gifts worthy of the Divine Messiah. Neither is there a record that suggests that the Holy Family paid any capital gains tax on them, though such gifts vastly increased their net wealth. Hence, another lesson: there is nothing immoral about wealth; wealth is something to be valued, owned privately, given and exchanged.
When the Wise Men and the Holy Family got word of Herod’s plans to kill the newborn Son of God, did they submit? Not at all. The Wise Men, being wise, snubbed Herod and “went back another way” – taking their lives in their hands (Herod conducted a furious search for them later). As for Mary and Joseph, an angel advised Joseph to “take the child and his mother, and fly into Egypt.” In short, they resisted. Lesson number four: the angels are on the side of those who resist [evil] government.
In the Gospel narratives, the role of private enterprise, and the evil of government power, only begin there. Jesus used commercial examples in his parables (e.g., laborers in the vineyard, the parable of the talents) and made it clear that he had come to save even such reviled sinners as tax collectors.
And just as His birth was facilitated by the owner of an “inn” (in Greek, “kataluma”), the same word is employed to describe the location of the Last Supper before Jesus was crucified by the government.
Thus, private enterprise was there from birth, through life, and to death, providing a refuge of safety and productivity, just as it does in our own lives.