Are Law Students Dumber? Law Profs See Damage Done by NCLB

One professor at a top-20 law school recently confided that he has to teach his students how to write business letters. A professor at another elite school complained that grading exams is far more difficult now because the writing skills of students are so deficient that each exam requires several reads. Bernstein’s article suggests that he knows what accounts for this—federal education policy. He may be right.

Teaching to the test overshadows (if not supplants) teaching critical thinking, higher-order reasoning, and the development of creative-writing skills. As Bernstein emphasizes, contemporary teaching or teaching to the test does not “require proper grammar, usage, syntax, and structure.” In fact, those skills may be perceived as unimportant in this modern age—as many of the tests taken by K-12 students employ multiple choice, and those that require essays grade on a rubric that pays little if any attention to the quality of writing.

Law Professors See the Damage Done by ‘No Child Left Behind’ – The Conversation – The Chronicle of Higher Education.

Something to add to the pile facing legal education today: students may not be as smart as they used to be. And that’s a problem because the law is more complex today than ever and requires extraordinary analytical and critical thinking skills. If you show up at law school lacking the necessary skill set, you will not do well.

As the father of two teenagers, I can tell you that even in the best public schools “teaching to the test” is a serious problem. Bright kids hit high school without a lot of writing and independent thinking skills, and they aren’t learning or even working on those skills there. I can certainly see where issues are going to come up in higher education as these kids move forward.

Be sure to read the comments following the Chronicle piece; there is some good stuff in there.

Reflections on #ReInventLaw and Some Thoughts on #ReInventLegalEdu

I spent Friday March 8 at one of the most interesting conferences I’ve ever attended. The ReInvent Law Silicon Valley conference promised 40 speakers in 9 hours, a day of the best in innovation in law practice, courts, and more. It certainly delivered. Much has been written about specific sessions and the ideas presented, so I’ll skip that sort of analysis. Instead I’d like to give you my impressions a few days after the conference and after having let it sink in a bit.

It is now apparent to me that there is a major structural shift going on in the way law, especially “big law”, is being practiced. Practice and the courts are embracing many aspects of technology as part of doing business. Lawyers and judges are looking to technology to increase efficiency, automate rote tasks, and create space, physical and virtual, for more personal interaction between attorneys, clients, judges, parties, and legal consumers. There is a growing awareness that some basic legal services can be delivered by non-lawyers and that there is a great unmet need for legal services among those who cannot afford to pay hundreds of dollars an hour for a lawyer’s time.

As with all structural shifts like this, there are barriers and points of resistance. A major barrier to the shift is ABA Rule 5.4 governing the professional independence of a lawyer. This rule effectively prohibits the investment of outside capital in law firms or other organizations engaged in the practice of law. Without access to outside capital and business structures, practicing attorneys will have a difficult time taking full advantage of the opportunities presented by increasing the use of technology. They will get there eventually, but the rate of change would be greatly accelerated by an infusion of outside capital.

Of course it isn’t just a practice rule that is slowing down the ReInvent Law movement. Lawyers themselves need to examine the way they have traditionally structured practice. The billable hour came under criticism just as often as Rule 5.4 during the presentations. Billing by the hour builds inefficiency into the system and was cited as a strong disincentive to efficient practice. Indeed, the wisdom of continuing the “big law” model of life-tenure partnerships was questioned. A more flexible structure with compensation linked to actual outcomes was seen as a better model for dealing with the swift changes in technology confronting the legal profession.

And what about legal education? While the gathering included members of the legal academy both as presenters and in the audience (the organizers of the event are professors at Michigan State University College of Law), the focus of the day was really on the practice of law and, to a lesser extent, the courts. Reform or reinvention of legal education was the topic of only a couple of the presentations, which was a shame because we could really use a #ReInventLegalEdu movement right about now.

There is little doubt that legal education in the US is facing a crisis of its own. Enrollment is down, applications are down, graduate employment is down, and debt load among graduates is high. On top of that, the nature of the practice of law is undergoing a structural change. As law schools struggle with the crisis I think they are at risk of increasing the disconnect between the academy and practice, especially if they continue to turn out graduates who are being educated for a style of practice that is disappearing.

The challenge to law schools, in the face of this crisis, is to embrace the changes moving through practice and the courts and #ReInventLegalEdu. There is an opportunity here for law schools to change, to embrace the technology and methodologies heralded by ReInvent Law, to provide the research and development that the practice and courts will need, and to graduate more tech-savvy lawyers. I think change is coming to legal education, and law schools have a choice: lead the charge to a new future of legal practice, or have change imposed on them by the practice and the courts. I’m hoping something like #ReInventLegalEdu helps to lead the way forward.


An Experiment in Document Conversion and Generation

This is the README file for the GitHub repository that holds the files used and created in this experiment. I’m including the README in its entirety since it kills two birds with one stone.


1. Introduction

This repo holds a set of files that I created as an experiment in getting old work out of proprietary formats. The idea is to take a MSFT Word file and convert it into something that is human-readable, openly formatted, and convertible.

To do this I settled upon AsciiDoc to mark up the text of the paper. I chose AsciiDoc over Markdown because of its depth of features and the availability of conversion tools.


2. The Process

I decided to use a local install of Etherpad Lite (EL) as my primary text editor for this project. I did this because of a few of its features, including autosave, versioning, and the potential for real-time collaboration. I hoped that these features would provide me with a useful editing tool.

Once EL was set up and configured, I was faced with the problem of how to get the text of the paper into the editor in the first place. My initial inclination was to retype the document, formatting and editing as I went along. Faced with a 10,000-word doc and no appreciable typing skills, I was not happy with this option. After a bit of poking around in EL I found its import features. Getting MSFT Word files imported required a bit more configuring, but it worked. I then imported the Word file into EL.

The import process added the text of the document to the editor. It stripped all of the formatting from the text and inserted the 112 footnotes in-line into the text. All of this was actually a good thing, making the process of marking up the doc with AsciiDoc easier. Using the original word processing file as a guide, I worked through the document adding the necessary AsciiDoc markup to format the paper. The most tedious part was the 112 footnotes, but since AsciiDoc handles footnotes with in-line markup it moved along as fast as could be expected.
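
For reference, this is what AsciiDoc’s in-line footnote macro looks like; the sentence and note text below are just placeholders, not lines from the paper:

    The court rejected that argument.footnote:[Full text of the footnote goes here.]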

In total I spent about 6 hours working on the AsciiDoc version of the document. The most time was spent tagging footnotes and figuring out the format for the bibliography. (I am still not really pleased with the way the bibliography looks, but I think I can fix that on a later iteration.) The rest of the formatting, such as section titles, quotes, emphasis, and lists, was straightforward, though I did keep a copy of the AsciiDoc User Guide open in another tab to help out.

I found the Etherpad Lite interface easy to work with and really appreciated the autosave and versioning features. EL doesn’t know about AsciiDoc markup, though, so that presented a challenge. In order to preview the work I had to export the file as text and then run the basic AsciiDoc-to-HTML conversion, opening the resulting file in another browser tab to see what was going on. As I became more confident of my work, I checked less often, so this was not much of an issue. I marked major revisions as saved revisions at the end of each section of the document to give me a nice clean revision history.

Once I had a nice clean version that produced good HTML, I exported a final copy to my local computer and set about using the AsciiDoc utility a2x to generate the document in various formats. For this particular experiment I went with XHTML, PDF, and EPUB. The generation/conversion process was marred only by my problems with understanding the format for the bibliography at the end of the document. Once I figured out just how to mark up the bibliography, the process was flawless. a2x first converts the AsciiDoc-marked document into a DocBook XML file and then converts the DocBook file into other formats. The process uses the standard set of XML processing tools as well as CSS to generate the files. By using custom CSS files, the layout and formatting of the various output files can be changed as needed.


3. The Files

The files included in this repo are the ones used and generated as part of the process described above.

KELSOFIN20130111.docx The MSFT Word file that was used as the starting point. This document began as a WordPerfect file in 1992 and was moved to Word in the mid-’90s.
KelsoPaper.txt This is the AsciiDoc version of the file as created and edited in Etherpad Lite. This is the file used to generate the other formats.
KelsoPaper.pdf PDF file generated from KelsoPaper.txt using the command a2x -v -f pdf KelsoPaper.txt
KelsoPaper.html XHTML file generated from KelsoPaper.txt using the command a2x -v -f xhtml KelsoPaper.txt
docbook-xsl.css CSS file used to style KelsoPaper.html
KelsoPaper.epub EPUB file generated from KelsoPaper.txt using the command a2x -v -f epub KelsoPaper.txt

4. Conclusion

I am happy with the results of this experiment and hope to be able to further explore the use of Etherpad Lite and AsciiDoc as a tool set for creating free and open documents.

Hackthelaw: Piratebox meets Free Law

There are few “down” times in the CALIverse, but the Christmas through New Year holiday break is one of them. I use the time to do updates and upgrades and installs that would be disruptive at other times of the year. I also use the quiet stretches to try out new things. One of the new things I took a shot at this break was building a PirateBox. A PirateBox is:

 Inspired by pirate radio and the free culture movements, PirateBox utilizes Free, Libre and Open Source software (FLOSS) to create mobile wireless communications and file sharing networks where users can anonymously chat and share images, video, audio, documents, and other digital content.

— http://wiki.daviddarts.com/PirateBox

I grabbed an old Asus Eee PC netbook that runs Debian Linux and followed the instructions on the wiki. The setup was pretty straightforward, but it is important to remember that you are disconnecting the wireless on the PC from the Internet and using it to create an access point of its own, so once you launch the PirateBox script you no longer have Internet access via wireless. I decided to call my version hackthelaw.

Once I had it up and running there was the matter of content. As it happens, I have a lot of free law lying around (occupational hazard). I was casting about for a USB thumb drive to load stuff onto when I remembered the great Free Law Reporter thumb drive that we did for CALIcon11. It contains LOTS of court opinions in EPUB format and seemed like a perfect starting point for downloads. I took one of the FLR drives and added all of the eLangdell ebooks (all formats), some choice gov docs from FDsys including the US Code, and the EPUB version of the Delaware state code. I plugged this into hackthelaw and had a very nice collection of law that could be downloaded by anyone who connects to hackthelaw.

If you’re still with me, you’re probably asking yourself, “So, what does all this mean to me?” Well, that’s a good question. The hackthelaw box is an open, anonymous network stocked with primary and secondary legal materials that are freely available for download.  People can connect to the network and download any of the materials as well as chat with others connected to the network. All this is in a closed network space separate from the Internet.  I can easily imagine setting this up in a library as a way for folks to access legal materials and even ask basic questions about the resources.  Any device that has WiFi can connect to the network, so folks could download materials directly to their phones or tablets as well as laptops. Consider hackthelaw as another Free Law access point.

Beyond being a distribution node for Free Law, devices like hackthelaw have potential uses in legal education and practice. A closed private network could be used to distribute and receive law school exams. A professor could launch a network at the beginning of a class to provide students with that day’s material. In practice, such a device could be used to gather initial client intake information. In conferences or negotiations a private network could handle the exchange of documents between parties. There are lots of possibilities here, and, as time becomes available, I hope to look into some of them in the not too distant future.

If you’re interested, I’ll be running some sort of hackthelaw device at the CALI booth in the AALS exhibit hall in New Orleans, January 4–6, 2013.


Some Quick Drupal Troubleshooting Tips

You don’t have to use Drupal (or any other user-configurable software package, for that matter) for very long before you do something that causes it to behave unpleasantly. While answering one of those “why is it doing that?” types of questions this afternoon, it occurred to me that there are a number of rules/tips I follow when working with Drupal.

Following these little tips makes life easier and will help you maintain some sanity while getting some work done.

  • Always use the admin or user 1 account (the first one you created when you installed the site) when turning modules on and off. Sometimes modules have odd permissions settings that require the super user to do the install, but they don’t bother to tell you that or give any useful error when it goes wrong.
  • Always clear the cache when you install modules, change/add content types, create/edit views or panels, make any theme changes, or do any sort of updates. Drupal caches lots of stuff, and it’s easier to clear the cache manually than it is to try to guess whether what you just did is cached or not. I usually don’t worry about the warnings about clearing the cache affecting performance because most sites are small and lightly used enough for it not to make much of a difference. (If you’d rather clear caches from code, see the sketch after this list.)
  • Install and test modules one at a time. Don’t install a bunch of stuff and then go back and start testing, because troubleshooting is easier if you’ve only changed one thing. Of course some modules require others, so you’ll have to install groups of modules at some point, but remember to turn them all off if something goes amiss.
  • Don’t monkey with the Drupal database unless you REALLY know what you’re doing. Something as simple as saving a blog post in Drupal can touch a lot of tables. Even if you are seeing database errors, it’s usually a better idea to try everything you can to clear them up from the Drupal admin pages first.
  • Keep a second (or third) browser handy so that you can have an anonymous view of your Drupal site while you’re developing. This is really handy when those pesky caching problems pop up. It also helps avoid those “I can see it, why can’t you” problems that crop up when someone actually tries to visit your site.
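
Since cache clearing comes up so often, here is a minimal sketch of doing it from code rather than from the admin UI. It assumes a bootstrapped Drupal 7 site (for example, run through drush’s php-script command); the function itself is core Drupal.

    <?php
    // Clear every Drupal cache (menus, theme registry, page/block caches, etc.).
    // Equivalent to the "Clear all caches" button on the Performance page.
    drupal_flush_all_caches();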

I’m sure there are more, so feel free to add them in the comments.


Stallman Points Out Problems With CC-BY-NC, CC-BY-NC-SA Licenses For Edu Works

Prominent universities are using a nonfree license for their digital educational works. That is bad already, but even worse, the license they are using has a serious inherent problem.

When a work is made for doing a practical job, the users must have control over the job, so they need to have control over the work. This applies to software, and to educational works too. For the users to have this control, they need certain freedoms (see gnu.org), and we say the work is “free” (or “libre”, to emphasize we are not talking about price). For works that might be used in commercial contexts, the requisite freedom includes commercial use, redistribution and modification.

Creative Commons publishes six principal licenses. Two are free/libre licenses: the Sharealike license CC-BY-SA is a free/libre license with copyleft, and the Attribution license (CC-BY) is a free/libre license without copyleft. The other four are nonfree, either because they don’t allow modification (ND, Noderivs) or because they don’t allow commercial use (NC, Nocommercial).

In my view, nonfree licenses are ok for works of art/entertainment, or that present personal viewpoints (such as this article itself). Those works aren’t meant for doing a practical job, so the argument about the users’ control does not apply. Thus, I do not object if they are published with the CC-BY-NC-ND license, which allows only noncommercial redistribution of exact copies.

Use of this license for a work does not mean that you can’t possibly publish that work commercially or with modifications. The license doesn’t give permission for that, but you could ask the copyright holder for permission, perhaps offering a quid pro quo, and you might get it. It isn’t automatic, but it isn’t impossible.

However, two of the nonfree CC licenses lead to the creation of works that can’t in practice be published commercially, because there is no feasible way to ask for permission. These are CC-BY-NC and CC-BY-NC-SA, the two CC licenses that permit modification but not commercial use.

The problem arises because, with the Internet, people can easily (and lawfully) pile one noncommercial modification on another. Over decades this will result in works with contributions from hundreds or even thousands of people.

via On-line education is using a flawed Creative Commons license.

This is a larger quote than I usually use, but Richard Stallman has a very important point here. By attaching the NC (NonCommercial) attribute to a Creative Commons license you preclude the possibility of a commercial use ever, even in the future, because once the work has been modified a few times it will become too burdensome, if not downright impossible, to track down all of the rights holders to get agreement on a commercial use of the work.

To be honest, it never occurred to me that using the NC attribute could ever have such an effect. I saw it as a way to require someone who wanted to use a work commercially to come forward to the rights holder and ask specific permission for a commercial license. That remains true only so long as the work hasn’t been modified. Once the work is modified and shared, as required by the Share-Alike (SA) attribute, anyone wanting to make a commercial use of the work would need to trace back the chain of rights holders to get the necessary permissions.

In the educational world it is easy to imagine CC-licensed works being modified and used over and over again as they pass through the hands of hundreds or thousands of teachers and students. Getting permission for commercial use of a work that has been authored by hundreds of people over a span of years would be pretty much impossible. As Stallman points out, “[f]or works that might be used in commercial contexts, the requisite freedom includes commercial use, redistribution and modification.” Here this means that the NC attribute should not be used: it removes the freedom to make a commercial use of the work, since even though commercial use is technically possible, it is practically impossible.

If the goal of creators of open educational resources is to create free/libre resources that are available to all, to make education better and more widely available, then the NC attribute should be avoided when setting Creative Commons licenses for educational works. Does this mean that someone could take a work and sell it rather than providing it for free? Yes it does, but it isn’t likely since it is hard to compete with free. Does it mean that someone could take a work, modify it, and sell it? Again yes, but then it is up to the market to decide if the modifications represent an added value that makes it worth more than the freely available version. No matter what, I think having free/libre and open educational resources outweighs the need to lock them up in restrictive licensing.

Ars Technica Reviews a New Developer Focused Dell Linux Ultrabook and They Like It

Earlier this year, [Dell] announced a pilot program, “Project Sputnik,” intended to produce a bona fide, developer-focused Linux laptop using their popular XPS-13 Ultrabook as base hardware. The program turned out to be a rousing success, and this morning Dell officially unveiled the results of that pilot project: the Dell XPS 13 Developer Edition.
The XPS 13 used in the Developer Edition features a number of upgrades over the pilot Project Sputnik hardware, including an Intel i5 or i7 Ivy Bridge CPU and 8GB of RAM (the pilot hardware used Sandy Bridge CPUs and had 4GB of RAM). The Developer Edition also comes with a 256 GB SATA III SSD, and retains the pilot version’s 1366×768 display resolution. The launch hardware costs $1,549 and includes one year of Dell’s “ProSupport.” Additional phone support options aren’t yet available.
The laptop comes with Ubuntu Linux 12.04 LTS plus a few additions. Dell worked closely with Canonical and the various peripheral manufacturers to ensure that well-written, feature-complete drivers are available for all of the laptop’s hardware. Out of the box the laptop will just work. They also have their own PPA if you want to pull down the patches separately, either to reload the laptop or to use on a different machine.

via Dell releases powerful, well-supported Linux Ultrabook | Ars Technica.

Important additions to the pre-installed Ubuntu 12.04 include two new Dell-sponsored open source projects, Profile Tool and Cloud Launcher, designed to make life easier for developers. Overall this sounds like an excellent machine for serious developers, especially those looking for an alternative to the Apple world.


Notes from Drupalcamp Atlanta 10/27/12

These are my notes from dcATL.
  • Josh Clark @globalmoxie
  • The mobile future
  • Mobile is a new platform. What do we do with the new platform?
  • How do we do more with mobile?
  • Sensors give us super powers.
  • Mobile provides the opportunity to interpret the environment, think of augmented reality.
    • Think of ways to use camera and audio in the classroom, like a prof mentions a case and it pops up on the device.
  • Table Drum app uses augmented audio.
  • AnyTouch turns everyday objects into interface objects.
  • Leap Motion moves touch interface into 3d space, natural gestures.
  • Natural gestures are the next breakthrough in interfaces.
  • We need to design for natural gestures.
  • Windows 8 is intended to work with any input interface. Hugely challenging.
  • Medical field is using all sorts of special sensors with mobile devices to drive data collection.
  • Personal sensors make sense of our environment.
  • But we don’t need more operating systems, interfaces.
  • Remote control is an answer.
  • Ambiguous control among devices is coming, think of phones in cars. Your car rings. When you park the car, the interface follows you. Migrating interface.
  • http://bit.ly/day-glass – A day made of glass from Corning.
    • One smart device somewhere that is driven by ambiguous interfaces
  • Wii U
  • Grab Magic http://bit.ly/grab-magic
  • http://bitly.com/proto-gestures
  • Sifteo cubes are social toys.
    • Download software as it needs it.
  • Web is just in case, everything is loaded in case we need it. Needs to move to just in time, software loaded when we need it.
  • Passive interfaces just work on their own, doing the things they need to do to perform the functions they are designed to do.
  • Devices will get both dumber and smarter.
  • Metadata is the new art direction – Ethan Resnick @studip101
  • A cloud of social devices
  • Look beyond the interface, beyond the device, the presentation to the content and the services.
  • Push sensors
  • Think social not FB
  • Your ecosystem
  • We’re all cloud developers
  • Mind your metadata
  • New input methods
  • The future is here
  • Eric Webb @erikwebb
  • See slideshare
  • Evaluating modules
    • Supported version, maintainer rep, usage, # of open issues, usage over time.
    • Record before and after install using Devel module
    • Search for the tag “performance” to weed out general issues.
    • What to look at
      • When does it run?
      • How does it scale?
      • What if it fails?
      • Does my site care?
      • Do I need this module?
    • ID the problem
    • Where problems occur
      • Page building like views and panels
      • External web services
      • Overall complexity
        • Views in panels in panels….
      • Misconfigured components
    • Keep records, establish a metric, adopt a definition of done, don’t hide behind infrastructure
  • Types of caching
    • App-level caching is not really configurable. Things like menus, forms
    • Component level caching, user facing stuff like blocks, views, panels
      • Best to speed up for authenticated users
    • Page level caching is important mostly for anon users
  • Configuring Drupal for caching (a short sketch of the relevant settings follows these notes)
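
As a companion to the caching notes above, here is a hedged sketch of turning on Drupal 7’s built-in caches from code. These are the same settings found at admin/config/development/performance; the variable names are core Drupal 7 variables, and the values are only examples.

    <?php
    // Page-level cache for anonymous users.
    variable_set('cache', 1);
    // Component-level cache for blocks.
    variable_set('block_cache', 1);
    // Minimum cache lifetime (0 = none) and external page cache max age, in seconds.
    variable_set('cache_lifetime', 0);
    variable_set('page_cache_maximum_age', 300);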

  • Randall Kent @randallkent rkent@sevaa.com
  • http://bit.ly/dcatl-services
  • Web services as the tip of the iceberg.
  • REST is the key to getting at the stuff in Drupal. REST is one way to create an API on Drupal.
  • REST
    • built on http
      • GET, POST, PUT, DELETE
    • Client/Server
      • Separates ui from data storage
    • Stateless
      • All info necessary to process request must be included in the request itself
    • Cacheable
    • Layered
    • Uniform interface
  • /myapi/node – gets XML
  • /myapi/node.json – gets JSON
  • REST console for Chrome
  • http://github.com/randallkent
    • DrupalREST.php
    • DrupalREST.net
  • See http://drupanium.org
  • David Bassendine @dbassendine
  • Open data, social, business tools
  • Few modules for consuming services
  • Always start with looking online for a module
  • REST vs SOAP
  • Get to know the API you are working with
    • URL and path structure
    • Testing in browser for GET, POST requires extension/plugin
  • Services client for D7 will consume Services from another Drupal instance
  • REST API and Query API handle some RESTful APIs that serve json
    • See the Redmine module for an example
  • Core HTTP API for other services
    • drupal_http_request($url, $options), where $options can include headers, method, and data (a short sketch follows these notes)
    • Slightly diff D6 & D7
  • Last 2 require custom modules to do the work
  • Krumo – http://krumo.sourceforge.net/
  • Talking to Web Services – Resources
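
To make the “Core HTTP API” bullet above concrete, here is a minimal sketch of consuming a JSON web service with Drupal 7’s drupal_http_request(). The endpoint URL is hypothetical.

    <?php
    // Issue a GET request for a JSON resource (URL is a placeholder).
    $response = drupal_http_request('http://example.com/myapi/node.json', array(
      'method' => 'GET',
      'headers' => array('Accept' => 'application/json'),
    ));
    // On success, decode the JSON body into a PHP array.
    if ($response->code == 200) {
      $nodes = drupal_json_decode($response->data);
    }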

  • Matthew Connerton @connerton
  • AJAX allows for the refresh of data in the browser page without refreshing the whole page.

    Sample code for AJAX in Drupal 7 (a sketch appears after these notes)
  • Replaces AHAH, which is a good thing (AHAH pulled in lots of cruft)
  • “use-ajax” class
    • drupal_add_library('system', 'drupal.ajax') to get Ajax in.
    • Pulls jquery in
  • $form[‘#ajax’]
    • Blur is the default trigger.
  • It may ease the pain of the auth code stuff.
  • Check Drupal API for AJAX Framework docs.
    • includes/ajax.inc
  • Using #states in Form API
  • Ctools modal to open modal boxes for editing and such.
    • “ctools-use-modal” class
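
As noted above, here is a hedged sketch of the Form API #ajax pattern from Drupal 7’s AJAX Framework. The module, form, and callback names are hypothetical; the #ajax keys are the documented ones.

    <?php
    // A select element that rebuilds a preview area via AJAX when it changes.
    function mymodule_example_form($form, &$form_state) {
      $form['color'] = array(
        '#type' => 'select',
        '#title' => t('Color'),
        '#options' => array('red' => t('Red'), 'blue' => t('Blue')),
        '#ajax' => array(
          'callback' => 'mymodule_example_ajax_callback',
          'wrapper' => 'color-preview-wrapper',
        ),
      );
      $form['preview'] = array(
        '#prefix' => '<div id="color-preview-wrapper">',
        '#suffix' => '</div>',
        '#markup' => t('Pick a color.'),
      );
      return $form;
    }

    // AJAX callback: return the piece of the form that replaces the wrapper div.
    function mymodule_example_ajax_callback($form, $form_state) {
      $form['preview']['#markup'] = t('You picked @color.', array('@color' => $form_state['values']['color']));
      return $form['preview'];
    }
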
  • Doug Vann dougvann.com
  • Module filter is cool
  • DraggableViews
    • Makes rows of views draggable
    • Can be rearranged by drag and drop
    • Has AJAX
    • No relationship required
    • Could use this to provide a sort on Lesson topics based on order in the topic grid
    • Use this to rearrange stuff on the topic list view itself on the home page
    • No subsets or at least not easily handled
  • Nodequeue
    • Collect nodes in an arbitrary order
    • Requires relationship in order to bring stuff into proper scope


U of Minnesota Releases “Cultivating Change in the Academy”, Highlights Future of the Book

This collection of 50+ chapters showcases a sampling of academic technology projects underway across the University of Minnesota, projects that we hope inspire other faculty and staff to consider, utilize, or perhaps even develop new solutions that have the potential to make their efforts more responsive, nimble, efficient, effective, and far-reaching. Our hope is to stimulate discussion about what’s possible as well as generate new vision and academic technology direction. The work underway is most certainly innovative, imaginative, creative, collaborative, and dynamic. This collection of innovative stories is a reminder that we are a collection of living people whose Land Grant values and ideas shape who we serve, what we do, and how we do it. Many of these projects engage others in discourse with the academy: obtaining opinion or feedback, taking the community pulse, allowing for an extended discourse, and engaging citizens in important issues. What better time to share 50+ stories about cultivating change than in 2012 – the 150th anniversary of the founding of the Land Grant Mission!

via University of Minnesota Digital Conservancy: Cultivating Change in the Academy: 50+ Stories from the Digital Frontlines at the University of Minnesota in 2012.

Produced in just 10 weeks, this book is a snapshot of academic technology projects and research underway at the University of Minnesota. Of more interest to me than the speed with which it was produced or the subject matter are the formats in which the book was released. First, it is a blog and a website. Each chapter is a post with the text of the chapter embedded as a PDF file. The blog has commenting enabled, RSS feeds and its own Twitter hashtag, #CC50, so that readers may engage the authors in ongoing discussion. Second, the work is available in EPUB, .mobi, and PDF formats so you can read it on the platform of your choice. The work carries a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

As I’ve stated in a prior post, I think the future of books, especially textbooks and other educational materials, lies on the web, not locked into some closed or crippled format. This book serves as an excellent example of the future of the book.

Tap Here To Begin Writing…

“Tap here to begin writing”. That is the very simple instruction given in the WordPress app on the iPad when you hit the new post button. If it were only that simple. Writing is hard for me. I know people who write every day, and they often make it look simple. For me writing is a bit of a struggle. Not that I don’t have things to say, I do, but sitting down to put my thoughts on “paper” does not come easy for me.

I’ve had a blog for over 10 years now and it has served mostly as a sort of digital scrapbook, a place for me to stash links and snippets of sites that caught my attention. Mixed in there is some actual writing, but I’ve probably thrown away more than I’ve posted. Weirdly, the blog has often acted as a sort of deterrent to writing, a looming presence where I should be writing, but I don’t. It pushes out the possibility of writing in another medium or venue.

When asked how they write, writers often say that they just sit down and write for a certain amount of time, at a certain time. They seem to be citing a particular discipline that creates the right environment for writing. This notion also fosters the idea that writing can be a skill that is developed with regular practice. I’m going to put that idea to the test.

I work at home, and as a developer of webby things I spend a lot of time sitting in my chair in front of a bank of computers. This is not a recipe for a healthy lifestyle. And I’ll be turning 50 in less than a month. I’m not a complete couch potato though. I walk a 3-mile circuit regularly, alternating with an exercise program that works my upper body. I’ve made time in my day to get this exercise in and it really gets my day off to a good start. Now I’m thinking that I’ll expand this exercise program to include a daily dose of writing.

The plan is to spend 30 minutes a day writing, probably in the morning while I have a cup of coffee and before exercising. The goal is to regularly kick out 500–600 words during the 30 minutes. As far as topics go, I’m not too sure of that yet. Probably some mix of tech stuff and free-form verse; I’ll just have to see where this goes.