Andrew Odlyzko's latest is a short rant on DRM. Consider it in light of the recent Rick Rubin profile, "The Music Man," in the NYT Magazine.
The fundamental issue that limits current use and future prospects of DRM is that, in the words of [10],
The important thing is to maximize the value of your intellectual property, not to protect it for the sake of protection.
DRM all too often gets in the way of maximizing the value of intellectual property.
People are very frequently willing to pay more for flat rate plans than they are for metered ones, even if their usage does not change. The trend towards flat rate plans is not universal, and there is likely to be a spectrum of charging schemes. Flat rate plans are likely to dominate for inexpensive and frequently purchased goods and services, and extreme examples of differential pricing are likely to prevail for expensive and seldom-purchased things; see [4] for a discussion and evidence.
But overall, we should expect to see growth in flat rate pricing and bundling (as in subscriptions to magazines, or in a collection of cable channels for a single price). In addition to a willingness to pay more for flat rate plans, people tend to use more of a good or service that does not involve fine-scale charging or decision making. Typical increases in usage are from 50% to 200% when users are switched from metered to flat rates. Depending on whether one wishes to increase or decrease usage, this may or may not be desirable, but in the case of information goods, the overwhelming incentive is to increase usage. This provides yet another incentive to avoid fine-grained pricing and control that DRM is often designed for.
What we are likely to end up with is a huge universe of free material, much of it of little interest to all but a handful of people.
Q: It seems that ... I can't get through more than 15 minutes of work without someone interrupting me, and then I lose my train of thought.
...
A: What you really want is a way to remember what you were doing when you were doing it, and debugging is one of the best examples of this. Nothing is more annoying than to come back to a problem you were working on and not remember what you had already tried.
Real scientists, as opposed to lame hacks who claim to be scientists, know how to formulate ideas - called hypotheses - and test them. They write down each hypothesis, then describe the experiment and the results. They keep all of this data in logbooks or notebooks.
Just because your test passes or your code doesn't crash doesn't mean that you have completed your debugging.
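This is essentially the lab-notebook discipline applied to debugging. As a concrete illustration (my own sketch, not anything prescribed in the Q&A above), a logbook can be as simple as a timestamped hypothesis/experiment/result record appended to a text file; the file name and fields here are assumptions:

    # A minimal debugging logbook: append one timestamped
    # hypothesis/experiment/result record per entry to a plain-text file,
    # so the record survives interruptions and restarts.
    import datetime

    LOG_PATH = "debug_log.txt"  # hypothetical location; pick whatever suits the project

    def log_entry(hypothesis: str, experiment: str, result: str) -> None:
        """Append a timestamped hypothesis/experiment/result record."""
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        with open(LOG_PATH, "a", encoding="utf-8") as log:
            log.write(f"[{stamp}]\n")
            log.write(f"  Hypothesis: {hypothesis}\n")
            log.write(f"  Experiment: {experiment}\n")
            log.write(f"  Result:     {result}\n\n")

    if __name__ == "__main__":
        log_entry(
            hypothesis="The crash only happens when the config file is missing",
            experiment="Remove config.ini and rerun the failing test",
            result="No crash; hypothesis rejected",
        )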
VoIP hacker talks: Service provider nets easy pickings
Topic: Technology
8:00 pm EDT, Sep 3, 2007
Simple dictionary and brute-force attacks, combined with Google hacking, enabled a criminal pair to break into VoIP-provider networks and steal $1 million worth of voice minutes, says one of the duo, who has pleaded guilty to his crimes.
He designed software to generate 400 prefixes per second against the carrier gear, scanning all the combinations between 000 and 999 randomly to throw off intrusion-detection systems (IDS) that might pick up a sequential attack.
"Most of the telecom administrators were using the most basic password. They weren’t hardening their boxes at all."
He also wrote search strings that he fed into Google seeking exposed Web interfaces on devices, and that proved fruitful as well.
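The detail about scanning the 000-999 space in random order is the interesting bit: an IDS tuned to flag monotonically increasing probe sequences sees only unordered noise. A toy sketch of that one idea (purely illustrative, not the attacker's actual tool) amounts to nothing more than a shuffle:

    # Why randomized probing defeats sequential-pattern detection: the probe
    # order carries no monotonic structure for a simple IDS rule to match.
    import random

    prefixes = [f"{n:03d}" for n in range(1000)]  # the 000-999 space from the article
    random.shuffle(prefixes)                      # random order instead of 000, 001, 002, ...

    print(prefixes[:10])  # e.g. ['417', '083', '952', ...] -- no sequential pattern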
Software developer perceptions about software project failure
Topic: Technology
11:24 am EDT, Sep 1, 2007
The last sentence caught my eye:
Software development project failures have become commonplace. With almost daily frequency these failures are reported in newspapers, journal articles, or popular books. These failures are defined in terms of cost and schedule over-runs, project cancellations, and lost opportunities for the organizations that embark on the difficult journey of software development. Rarely do these accounts include perspectives from the software developers that worked on these projects.
This case study provides an in-depth look at software development project failure through the eyes of the software developers. The researcher used structured interviews, project documentation reviews, and survey instruments to gather a rich description of a software development project failure.
The results of the study identify a large gap between how a team of software developers defined project success and the popular definition of project success. This study also revealed that a team of software developers maintained a high level of job satisfaction despite their failure to meet the organization's schedule and cost goals.
Subscription required for access to full text, but there are at least 4 versions of the paper available.
And here are two versions of a video you're sure to enjoy (again, and again, and again):
and George Riley have collaborated with Xenofontas Dimitropoulos and Amin Vahdat on a new paper.
The coarsest approximation of the structure of a complex network, such as the Internet, is a simple undirected unweighted graph. This approximation, however, loses too much detail.
In reality, objects represented by vertices and edges in such a graph possess some non-trivial internal structure that varies across and differentiates among distinct types of links or nodes.
In this work, we abstract such additional information as network annotations. We introduce a network topology modeling framework that treats annotations as an extended correlation profile of a network.
Assuming we have this profile measured for a given network, we present an algorithm to rescale it in order to construct networks of varying size that still reproduce the original measured annotation profile.
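To make "annotations as an extended correlation profile" a bit more concrete, here is a small sketch of my own (not the paper's algorithm): given node annotations such as AS types, the simplest profile is just the fraction of edges joining each pair of annotation values.

    # Sketch: a simple annotation correlation profile for an annotated graph.
    # Each node carries a label (e.g. an AS business type); the profile is the
    # fraction of edges joining each unordered pair of labels.
    from collections import Counter

    # Hypothetical toy topology: node -> annotation, plus an undirected edge list.
    annotations = {"a": "tier1", "b": "tier2", "c": "tier2", "d": "stub", "e": "stub"}
    edges = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "e")]

    def annotation_profile(annotations, edges):
        """Return the fraction of edges between each unordered pair of annotations."""
        counts = Counter()
        for u, v in edges:
            counts[tuple(sorted((annotations[u], annotations[v])))] += 1
        total = sum(counts.values())
        return {pair: n / total for pair, n in counts.items()}

    print(annotation_profile(annotations, edges))
    # {('tier1', 'tier2'): 0.4, ('tier2', 'tier2'): 0.2, ('stub', 'tier2'): 0.4}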
In this column I'd like to informally respond to this call for participation and revisit some earlier thoughts I had on trust, to see whether we've made any significant progress on trust in the Internet in the past four years.
Jumping to the end:
So is trust the universal answer?
The problem for me is that "trust" is not that much different from blind faith, and, in that light, "trust" is not a very satisfying answer. The difference between "fortuitous trust" and "misplaced trust" is often just a matter of pure luck, and that's just not good enough for a useful and valuable network infrastructure. "Trust" needs to be replaced with the capability for deterministic validation of actions and outcomes of network-based service transactions. In other words, what is needed is less trust and better security.
The Internet Society (ISOC) Board of Trustees is currently engaged in a discovery process to define a long-term Major Strategic Initiative to ensure that the Internet of the future remains accessible to everyone. The Board believes that Trust is an essential component of all successful relationships and that an erosion of Trust, in individuals, networks, or computing platforms, will undermine the continued health and success of the Internet.
The Board will meet in special session in the first week of October 2007 for intensive study focused on the subject of trust within the context of network-enabled relationships. As part of this process, the Board is hereby issuing a call for subject experts who can participate in the two-day discussion. Topics of interest include: the changing nature of trust, security, privacy, control and protection of personal data, methods for establishing authenticity and providing assurance, management of threats, and dealing with unwanted traffic.
The Google Books Project has drawn a great deal of attention, offering the prospect of the library of the future and rendering many other library and digitizing projects apparently superfluous. To grasp the value of Google’s endeavor, we need, among other things, to assess its quality. On such a vast and undocumented project, the task is challenging.
In this essay, I attempt an initial assessment in two steps.
First, I argue that most quality assurance on the Web is provided either through innovation or through “inheritance.” In the latter case, Web sites rely heavily on institutional authority and quality assurance techniques that antedate the Web, assuming that they will carry across unproblematically into the digital world. I suggest that quality assurance in Google's Book Search and Google Books Library Project comes primarily through inheritance, drawing on the reputation of the libraries, and before them the publishers, involved.
Then I chose one book to sample Google's project: Laurence Sterne's Tristram Shandy. This book proved a difficult challenge for Project Gutenberg but, more surprisingly, it evidently challenged Google's approach as well, suggesting that quality is not automatically inherited.
In conclusion, I suggest that a strain of romanticism may limit Google’s ability to deal with that very awkward object, the book.
In a digitally connected, rapidly evolving world, we must transcend the traditional Cartesian models of learning that prescribe “pouring knowledge into somebody’s head." We learn through our interactions with others and the world ...
As opportunities for innovation and growth migrate to the peripheries of companies, industries, and the global economy, efficiency will no longer be enough to sustain competitive advantage. The only sustainable advantage in the future will come from an institutional capacity to work closely with other highly specialized firms to get better faster.
A new paper by Mark Allman, for an upcoming conference.
Incessant scanning of hosts by attackers looking for vulnerable servers has become a fact of Internet life.
In this paper we present an initial study of the scanning activity observed at one site over the past 12.5 years.
We study the onset of scanning in the late 1990s and its evolution in terms of characteristics such as the number of scanners, targets and probing patterns.
While our study is preliminary in many ways, it provides the first longitudinal examination of a now ubiquitous Internet phenomenon.
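To give a flavor of the kind of longitudinal measurement described here (my own sketch, not Allman's methodology), the most basic characteristic, the number of distinct scanners per year, can be pulled from a connection log in a few lines; the log format is an assumption:

    # Sketch: count distinct scanning sources per year from a simple log whose
    # assumed format is one "<ISO-timestamp> <source-IP>" line per flagged scan.
    from collections import defaultdict

    def scanners_per_year(lines):
        """Map year -> number of distinct source addresses seen scanning that year."""
        sources_by_year = defaultdict(set)
        for line in lines:
            timestamp, source = line.split()
            sources_by_year[timestamp[:4]].add(source)
        return {year: len(srcs) for year, srcs in sorted(sources_by_year.items())}

    sample_log = [
        "1998-03-04T12:00:00 10.0.0.1",
        "1998-07-19T02:30:00 10.0.0.2",
        "2007-08-01T09:15:00 192.0.2.7",
    ]
    print(scanners_per_year(sample_log))  # {'1998': 2, '2007': 1}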