The Real Alternative


How computers broke science – and what we can do to fix it

Photo – US Army

Ben Marwick, University of Washington

Reproducibility is one of the cornerstones of science. Made popular by British scientist Robert Boyle in the 1660s, the idea is that a discovery should be reproducible before being accepted as scientific knowledge.

In essence, you should be able to produce the same results I did if you follow the method I describe when announcing my discovery in a scholarly publication. For example, if researchers can reproduce the effectiveness of a new drug at treating a disease, that’s a good sign it could work for all sufferers of the disease. If not, we’re left wondering what accident or mistake produced the original favorable result, and would doubt the drug’s usefulness.

For most of the history of science, researchers have reported their methods in a way that enabled independent reproduction of their results. But, since the introduction of the personal computer – and the point-and-click software programs that have evolved to make it more user-friendly – reproducibility of much research has become questionable, if not impossible. Too much of the research process is now shrouded by the opaque use of computers that many researchers have come to depend on. This makes it almost impossible for an outsider to recreate their results.

Recently, several groups have proposed similar solutions to this problem. Together they would break scientific data out of the black box of unrecorded computer manipulations so independent readers can again critically assess and reproduce results. Researchers, the public, and science itself would benefit.

Computers wrangle the data, but also obscure it

Statistician Victoria Stodden has described the unique place personal computers hold in the history of science. They’re not just an instrument – like a telescope or microscope – that enables new research. The computer is revolutionary in a different way; it’s a tiny factory for producing all kinds of new “scopes” to see new patterns in scientific data.

It’s hard to find a modern researcher who works without a computer, even in fields that aren’t intensely quantitative. Ecologists use computers to simulate the effect of disasters on animal populations. Biologists use computers to search massive amounts of DNA data. Astronomers use computers to control vast arrays of telescopes, and then process the collected data. Oceanographers use computers to combine data from satellites, ships and buoys to predict global climates. Social scientists use computers to discover and predict the effects of policy or to analyze interview transcripts. Computers help researchers in almost every discipline identify what’s interesting within their data.

Computers also tend to be personal instruments. We typically have exclusive use of our own, and the files and folders it contains are generally considered a private space, hidden from public view. Preparing data, analyzing it, visualizing the results – these are tasks done on the computer, in private. Only at the very end of the pipeline comes a publicly visible journal article summarizing all the private tasks.

The problem is that most modern science is so complicated, and most journal articles so brief, that it's impossible for an article to include details of the many important methods and decisions a researcher made while analyzing data on a computer. How, then, can another researcher judge the reliability of the results, or reproduce the analysis?

Good luck recreating the analysis.
US Army

How much transparency do scientists owe?

Stanford statisticians Jonathan Buckheit and David Donoho described this issue as early as 1995, when the personal computer was still a fairly new idea.

An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures.

They make a radical claim. It means all those private files on our personal computers, and the private analysis tasks we do as we work toward preparing for publication should be made public along with the journal article.

This would be a huge change in the way scientists work. We’d need to prepare from the start for everything we do on the computer to eventually be made available for others to see. For many researchers, that’s an overwhelming thought. Victoria Stodden has found the biggest objection to sharing files is the time it takes to prepare them by writing documentation and cleaning them up. The second biggest concern is the risk of not receiving credit for the files if someone else uses them.

A new toolbox to enhance reproducibility

What secrets are within the computer?
US Army

Recently, several different groups of scientists have converged on recommendations for tools and methods to make it easier to keep track of files and analyses done on computers. These groups include biologists, ecologists, nuclear engineers, neuroscientists, economists and political scientists. Manifesto-like papers lay out their recommendations. When researchers from such different fields converge on a common course of action, it’s a sign a major watershed in doing science might be under way.

One major recommendation: minimize and replace point-and-click procedures during data analysis as much as possible with scripts that contain instructions for the computer to carry out. Mouse movements are ephemeral: they leave few traces, are difficult to communicate to other people, and are hard to automate. They're especially common during data cleaning and organizing tasks in a spreadsheet program like Microsoft Excel. A script, on the other hand, contains unambiguous instructions that can be read by its author far into the future (when the specific details have been forgotten) and by other researchers. Scripts are small files, so they can be included with a journal article. And scripts can easily be adapted to automate research tasks, saving time and reducing the potential for human error.

We can see examples of this in microbiology, ecology, political science and archaeology. Instead of mousing around menus and buttons, manually editing cells in a spreadsheet and dragging files between several different software programs to obtain results, these researchers wrote scripts. Their scripts automate the movement of files, the cleaning of the data, the statistical analysis, and the creation of graphs, figures and tables. This saves a lot of time when checking the analysis and redoing it to explore different options. And by looking at the code in the script file, which becomes part of the publication, anyone can see the exact steps that produced the published results.
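A minimal sketch of what such a script might look like, here in Python (the file name, column name and statistics are invented for illustration, not taken from any of the studies above):

```python
import csv
import statistics

# Create a toy data file so the example is self-contained.
# In real work this would be the raw data file deposited with the paper.
with open("specimens.csv", "w", newline="") as f:
    f.write("id,mass_g\n1,10.2\n2,9.8\n3,\n4,10.0\n")

def clean(rows):
    """Drop records with missing measurements and convert text to numbers."""
    return [float(r["mass_g"]) for r in rows if r["mass_g"].strip()]

def summarize(values):
    """Compute the summary statistics that would be reported in the article."""
    return {"n": len(values),
            "mean": statistics.mean(values),
            "sd": statistics.stdev(values)}

def run_pipeline(path):
    """Every step from raw file to reported numbers, in one rerunnable place."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return summarize(clean(rows))

result = run_pipeline("specimens.csv")
print(result)
```

Rerunning the script reproduces the reported numbers exactly, and a reader can change a single line (say, the cleaning rule) to see how the published results respond.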

Other recommendations include using common, nonproprietary file formats for storing files (such as CSV, or comma-separated values, for tables of data) and simple rubrics for systematically organizing files into folders, so others can easily understand how the information is structured. They recommend free software that is available for all major operating systems (e.g. Windows, Mac and Linux) for analyzing and visualizing data (such as R and Python). For collaboration, they recommend a free program called Git, which helps track changes when many people are editing the same document.
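Because CSV is plain text, no proprietary program is needed to read it. A small illustration using only Python's standard library (the table contents are made up):

```python
import csv
import io

# A plain-text table: readable in any text editor, spreadsheet or language.
raw = "site,artifact_count\nA,14\nB,9\nC,23\n"

rows = list(csv.DictReader(io.StringIO(raw)))
total = sum(int(r["artifact_count"]) for r in rows)
print(total)  # sum of artifact counts across all sites
```

The same file opens unchanged in Excel, R, or a text editor decades from now, which is exactly why the recommendations favor it over proprietary binary formats.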

Currently, these are the tools and methods of the avant-garde, and many midcareer and senior researchers have only a vague awareness of them. But many undergraduates are learning them now. Many graduate students, seeing personal advantages to getting organized, using open formats, free software and streamlined collaboration, are seeking out training and tools from volunteer organizations such as Software Carpentry, Data Carpentry and rOpenSci to fill the gaps in their formal training. My university recently created an eScience Institute, where we help researchers adopt these recommendations. Our institute is part of a bigger movement that includes similar institutes at Berkeley and New York University.

As students learning these skills graduate and progress into positions of influence, we’ll see these standards become the new normal in science. Scholarly journals will require code and data files to accompany publications. Funding agencies will require they be placed in publicly accessible online repositories.

Example of a script used to analyze data.
Author provided

Open formats and free software are a win/win

This change in the way researchers use computers will be beneficial for public engagement with science. As researchers become more comfortable sharing more of their files and methods, members of the public will have much better access to scientific research. For example, a high school teacher will be able to show students raw data from a recently published discovery and walk the students through the main parts of the analysis, because all of these files will be available with the journal article.

Similarly, as researchers increasingly use free software, members of the public will be able to use the same software to remix and extend results published in journal articles. Currently many researchers use expensive commercial software programs, the cost of which makes them inaccessible to people outside of universities or large corporations.

Of course, the personal computer is not the sole cause of problems with reproducibility in science. Poor experimental design, inappropriate statistical methods, a highly competitive research environment and the high value placed on novelty and publication in high-profile journals are all to blame.

What’s unique about the role of the computer is that we have a solution to the problem. We have clear recommendations for mature tools and well-tested methods borrowed from computer science research to improve the reproducibility of research done by any kind of scientist on a computer. With a small investment of time to learn these tools, we can help restore this cornerstone of science.

The Conversation

Ben Marwick, Associate Professor of Archaeology, University of Washington

This article was originally published on The Conversation. Read the original article.


In our Wi-Fi world, the internet still depends on undersea cables

A TOSLINK fiber optic cable with a clear jacket that has a laser being shone onto one end of the cable. The laser is being shone into the left connector; the light coming out the right connector is from the same laser. (Photo credit: Wikipedia)

Nicole Starosielski, New York University

Recently a New York Times article on Russian submarine activity near undersea communications cables dredged up Cold War politics and generated widespread recognition of the submerged systems we all depend upon.

Not many people realize that undersea cables transport nearly 100% of transoceanic data traffic. These lines, laid on the ocean floor, are about as thick as a garden hose and carry the world's internet, phone calls and even TV transmissions between continents at the speed of light. A single cable can carry tens of terabits of information per second.

While researching my book The Undersea Network, I realized that the cables we all rely on to send everything from email to banking information across the seas remain largely unregulated and undefended. Although they are laid by only a few companies (including the American company SubCom and the French company Alcatel-Lucent) and often funneled along narrow paths, the ocean’s vastness has often provided them protection.

2015 map of 278 in-service and 21 planned undersea cables.

Far from wireless

The fact that we route internet traffic through the ocean – amidst deep sea creatures and hydrothermal vents – runs counter to most people’s imaginings of the internet. Didn’t we develop satellites and Wi-Fi to transmit signals through the air? Haven’t we moved to the cloud? Undersea cable systems sound like a thing of the past.

The reality is that the cloud is actually under the ocean. Even though they might seem behind the times, fiber-optic cables are actually state-of-the-art global communications technologies. Since they use light to encode information and remain unfettered by weather, cables carry data faster and cheaper than satellites. They crisscross the continents too – a message from New York to California also travels by fiber-optic cable. These systems are not going to be replaced by aerial communications anytime soon.

A tangled cable caught by fishermen in New Zealand.

A vulnerable system?

The biggest problem with cable systems is not technological – it’s human. Because they run underground, underwater and between telephone poles, cable systems populate the same spaces we do. As a result, we accidentally break them all the time. Local construction projects dig up terrestrial lines. Boaters drop anchors on cables. And submarines can pinpoint systems under the sea.

Most of the recent media coverage has been dominated by the question of vulnerability. Are global communications networks really at risk of disruption? What would happen if these cables were cut? Do we need to worry about the threat of sabotage from Russian subs or terrorist agents?

The answer to this is not black and white. Any individual cable is always at risk, but likely far more so from boaters and fishermen than any saboteur. Over history, the single largest cause of disruption has been people unintentionally dropping anchors and nets. The International Cable Protection Committee has been working for years to prevent such breaks.

An undersea cable lands in Fiji.
Nicole Starosielski, CC BY-ND

As a result, cables today are covered in steel armor and buried beneath the seafloor at their shore-ends, where the human threat is most concentrated. This provides some level of protection. In the deep sea, the ocean's inaccessibility largely safeguards cables – they need only be covered with a thin polyethylene sheath. It's not that it's much more difficult to sever cables in the deep ocean; it's just that the primary forms of interference are less likely to happen. The sea is so big and the cables so narrow that the probability of running across one isn't high.

Sabotage has actually been rare in the history of undersea cables. There are certainly occurrences (though none recently), but these are disproportionately publicized. The World War I German raid of the Fanning Island cable station in the Pacific Ocean gets a lot of attention. And there was speculation about sabotage in the cable disruptions outside Alexandria, Egypt in 2008, which cut 70% of the country’s internet, affecting millions. Yet we hear little about the regular faults that occur, on average, about 200 times each year.

Redundancy provides some protection

The fact is it's incredibly difficult to monitor these lines. Cable companies have been trying to do so for more than a century, since the first telegraph lines were laid in the 1800s. But the ocean is too vast and the lines simply too long. It would be impossible to stop every vessel that came anywhere near critical communications cables. We'd need to create extremely long "no-go" zones across the ocean, which would itself profoundly disrupt the economy.

Fewer than 300 cable systems transport almost all transoceanic traffic around the world. And these often run through narrow pressure points where small disruptions can have massive impacts. Since each cable can carry an extraordinary amount of information, it’s not uncommon for an entire country to rely on only a handful of systems. In many places, it would take only a few cable cuts to take out large swathes of the internet. If the right cables were disrupted at the right time, it could disrupt global internet traffic for weeks or even months.

The thing that protects global information traffic is the fact that there's some redundancy built into the system. Since there is more cable capacity than there is traffic, when there is a break, information is automatically rerouted along other cables. Because there are many systems linking to the United States, and a lot of internet infrastructure is located here, a single cable outage is unlikely to cause any noticeable effect for Americans. An interactive platform developed by Erik Loyer and the author lets users navigate the transpacific cable network.

Any single cable line has been and will continue to be susceptible to disruption. And the only way around this is to build a more diverse system. But as things are, even though individual companies each look out for their own network, there is no economic incentive or supervisory body to ensure the global system as a whole is resilient. If there’s a vulnerability to worry about, this is it.

The Conversation

Nicole Starosielski, Assistant Professor of Media, Culture and Communication, New York University

This article was originally published on The Conversation. Read the original article.


A new generation of weird-looking space suits will take us to Mars

Image – NASA

David Andrew Green, King’s College London and Matteo Stoppa, King’s College London

When Russian cosmonaut Alexei Leonov conducted the world’s first space walk in 1965, the mission nearly ended in catastrophe. After 12 minutes outside the Voskhod spacecraft, the vacuum of space had caused Leonov’s suit to inflate so much he couldn’t get through the air lock. He was forced to manually vent oxygen from inside the suit to reduce its size and get back onto the ship before the effects of decompression sickness overcame him.

Amazingly, the design of many of the space suits in use today hasn’t changed that much. The Russians still use a variant of Leonov’s one-size-fits-all suit, the Orlan M, and the Chinese use the visibly similar Feitian. And while NASA’s Extravehicular Mobility Unit (EMU) has been updated since its initial development in the 1980s, its primary life support system dates to the Apollo missions of the 1960s.

However, the prospect of manned flights to Mars and advances in materials technology could change all this. For space tourism to take off and mankind to step onto Mars, we will need suits that look very different from those used today. Engineers are now developing a new generation of space suits that could help astronauts withstand longer periods in space and deal with the hazards of exploring other planets.

Future template?
20th Century Fox

Mini spacecraft

Most space suits are essentially mini spacecraft. Although typically just a few millimetres thick, the suits have to provide life support and protection against the vacuum, temperature extremes and micrometeorites of space. Without this protection, the drop in pressure would cause the body to swell up and lethal bubbles of nitrogen gas to form in the blood.

Suits that maintain a lower pressure, such as the 4.3 pounds per square inch (psi) of NASA’s EMU, make it much easier to move and so are less tiring. This makes a huge difference when spacewalks can last up to eight hours. The downside is this also increases the time an astronaut needs to spend breathing pure oxygen to reduce the risk of gas bubbles forming in the blood.

For its Mars suit, however, NASA is looking at much higher pressure designs such as the soft Z-2 and the hard-and-soft hybrid Mark III. These would effectively “dock” into the spacecraft or Mars base building, allowing the astronaut to enter but leaving the suits – and the irritating and potentially toxic Martian dust – outside.

A completely different approach would be to replace suits that pressurise the gas around the body with tight-fitting, stretchy garments that provide mechanical counter-pressure. This idea was first proposed in the 1970s but has only recently become possible with the creation of suitable materials. One example is the “BioSuit” developed at the Massachusetts Institute of Technology (MIT), which uses nickel-titanium shape-memory alloys to form a “second skin”.

Such a suit would also be much lighter than the 130kg of the EMU. It could also increase resilience, as minor rips or tears would be less likely to cause immediate fatal depressurisation. But this kind of suit will still need a space helmet to deliver breathable gas to the astronaut. Interestingly, the BioSuit is rumoured to form the basis for the suit being developed by Elon Musk’s company SpaceX for its astronauts to wear inside its Dragon capsule.

In a spin: the SkinSuit.

Under pressure

Astronauts have always worn full-pressure suits during takeoff and landing; once they're safely in space, for example on board the International Space Station, they can wear shorts and t-shirts. But since the 1970s, the Russians have recommended donning the Pengvin (Penguin) suit in an attempt to prevent the loss of muscle and bone, and the spinal stretching, that occur when astronauts spend time in zero-gravity environments. This stretching can increase their height by as much as 7cm, preventing them from fitting into their space suits or the moulded seats of the Soyuz transport vehicles that are, at the moment, the only way back to Earth.

The Pengvin suit comprises a belt with bungee cords wrapped around the shoulders and feet. This compresses the body in a way that loads it with the equivalent of 40kg of weight in order to simulate gravity. The problem is we do not experience gravity on Earth as a weight on our shoulders, and so astronauts usually choose not to wear the suit because it is very uncomfortable.

To overcome this, we have worked with the European Space Agency and international colleagues to create another body-tight suit that creates resistance at each point around the body that is proportional to that of real gravity. This means the full force of the combined “weight” is only felt at the feet, making the suit feel much more natural and comfortable to wear. Our research has shown that this “Gravity-Loading SkinSuit” can significantly reduce spine lengthening in a weightless environment, with a force less than 30% of Earth’s gravity.

ISS astronaut Andreas Mogensen wore the SkinSuit during his mission to the International Space Station in September 2015 but we have yet to find out if he has found it tolerable and whether it reduced any back pain and spine lengthening. Ultimately though, we hope these suits will reduce the risk of back injury due to intervertebral disc prolapse (slipped disc) when the astronauts land – something that would be catastrophic for a mission to Mars.

The Conversation

David Andrew Green, Senior Lecturer of Human & Aerospace Physiology, King’s College London and Matteo Stoppa, PhD candidate in Electronic Devices, King’s College London

This article was originally published on The Conversation. Read the original article.


Biotechnology in Renewable Energy Resources

By Edward Hunter

Alternative energy is defined as energy that comes from a natural, renewable source. It typically does not produce pollution and is derived from sources such as the sun, wind and water.

There have been many recent innovations in alternative energy as a result of expanding alternative energy technologies.

These technologies have made it possible to research how to use the alternative energy sources we have more effectively, so that each source generates the most power possible.

Alternative energy technologies have also been instrumental in discovering new ways to produce heating fuels, such as biodiesel, methanol and ethanol, from biomass.

It is imperative that companies dedicated to alternative energy continue to develop their technologies. The dwindling supply of fossil fuels and concerns over our dependence on foreign oil are driving many more people to consider alternative energy sources, and the demand to make alternative energy accessible to a larger number of people will only grow.

In recent years alternative energy technologies have propelled biomass and biodiesel to the forefront of the alternative energy movement. Biotechnology has become an extremely important area of research and development as a result of record-high gas and heating fuel prices.

Biomass is organic material made from plants or animals, originating from agricultural and forestry residue as well as municipal and industrial wastes and terrestrial and aquatic crops. Through the use of alternative energy technologies, biomass can be converted into usable fuels such as methane, ethanol, biodiesel, methanol and biocrude.

These products are viable and readily available alternatives to petroleum and gasoline. Biomass has also been found to be a source of biopower, which uses biomass to produce electricity through technologies such as direct firing, co-firing, gasification, pyrolysis and anaerobic digestion.

In the direct-firing method, biomass is burned to produce steam. The steam drives a turbine, which turns a generator that converts the power into electricity. Without alternative energy technologies it would be much more difficult to develop new ways to use the resources that are available naturally.

Alternative energy technologies also make it possible to discover new ways to develop alternative energy and to make it more user-friendly and efficient to use and install.

Alternative energy technologies truly are changing the face of the alternative energy movement and creating more innovative ways to use natural resources as well as providing new products that rely on alternative energy as their source of power.



That Offends Me!


Freeware and Freebies

Image via Flickr

I tend to update this page whenever I find something really good. Today’s find is a video converter that makes it pretty easy to watch videos across several devices. Just be careful when installing if you don’t want any of the extra offers that come with it. A clean install is possible. You just have to pay attention when going through the wizard. And if you don’t know what to do when faced with multiple options, ask someone before guessing.

There are a host of other free products by this company that I recommend trying. I especially like the jukebox that searches and organizes YouTube videos. However, same word of caution as above. Unwanted extra programs can be a drag. But they can be avoided.

Well, that’s about it for now. Happy computing! :-)

And remember, this blog is not affiliated with any of the companies, organizations or projects mentioned in this entry.

Media Players

  • VideoLAN – VLC media player. Ever wanted to capture a still image from a video? Most free media players won’t do it. After searching the web and reading all sorts of complicated do’s and don’ts, I stumbled upon this free program, which does it effortlessly. VLC also formats DVD playback in a variety of aspect ratios, which can be nice. And it plays FLAC audio files.

Video Search

  • Freemake – Jukebox that searches and organizes YouTube videos.
  • Blinkx – Not really freeware because there’s nothing to download. But it’s free and a good alternative to YouTube and Google video searches.

Video Conversion

  • Freemake – Awesome and fast
  • Handbrake – Does some formats that Freemake can’t. But can be slow.
  • Bink Video (RAD Video Tools) – Converts digital video files into different formats. Especially useful if your digital camera writes QuickTime .MOV files. Bink/RAD will convert them into .AVI files, which Windows Movie Maker can import!

Image Editing/Digital Painting

  • Krita – Someone just recently tipped me off about this. I don’t see any text function. But it might be in there somewhere. Digital artists should give this a try. I can’t draw my way out of a wet paper bag. So this one isn’t too useful for me.
  • Pixlr – This has three versions, each different. I like it way better than Instagram.
  • PhotoFiltre – One of my favorite free photo editors, with plug-ins, highlighting and a “fade last effect” feature, much like Photoshop version 4. PF doesn’t handle multiple layers like the GIMP, but it’s light and tasteful. Don’t confuse this with PhotoFiltre Studio, which is not freeware.
  • PhotoScape – This is a fantastic program with some great filters, fun photo stuff and useful text effects. I use this to rotate/level photos as I find it’s faster, easier and does a better job than anything else I’ve tried.
  • The GIMP – GIMP stands for “GNU Image Manipulation Program.” The GIMP just keeps getting better and better; features include text, drop shadow, bevels, layers, color replacement and lots of fine filters.
  • Some cool filters for the GIMP. While Photoshop 8bf filters may still be the industry standard, I find that using freeware opens me up to different graphics and artistic approaches that I’d otherwise never try. You don’t have to be a rocket scientist to install these filters. Just read the instructions and enjoy!
  • Virtual Photographer – This is a great program for enhancing photos, compatible with the GIMP as well as commercial software.
  • Picasa – IMHO the strongest thing about this photo editor is the excellent color, lightness and contrast fixing. And it’s very user friendly. Photo rotation is a bit blurry; I use PhotoScape for that.
  • Photo Pos Pro – Visually nice to look at, has some good effects and handles layers.
  • Photobie – I don’t use this one too much, but it has some good filters and is under steady development. Like anything else, software preference is a pretty personal thing. Definitely worth a try.
  • LightBox – Solid performer. The free version touches up pics nicely with a minimum of effort.
  • UnFREEz – Creates animated GIFs almost effortlessly, preserves transparency, and does a much better job (in terms of image quality) than MS GIF Animator.
  • Easy Thumbnails – Easily creates good, sharp thumbnails.
  • Vector Magic – Not free, but you can evaluate it for free with saving disabled.
  • Inkscape – Good for making banners, working with fonts and converting bitmap to vector graphics.

AntiVirus, Junk and Spyware Removal

  • AntiVir – A nice antivirus program from Germany with frequent free updates.
  • AdAware – A ‘too good to be true’ program for detecting and cleaning invasive ads and malware that can slow down your computer. With free updates and lots of options.
  • Advanced SystemCare – This was recommended by a visitor and it seems very powerful. But some may find it too aggressive, and Gizmo’s Freeware says some have reported errors after using it. I’ve tested it out and so far have had no probs with WinXP. It gets stuff CCleaner doesn’t, and vice versa.
  • CCleaner – Fantastic program for cleaning junk files from your hard drive, with frequent updates. Also useful for fixing registry integrity and blocking unwanted Windows startup programs. Use with extreme caution, and don’t even think about going past the default settings unless you know what you’re doing!
  • Glary Utilities – Recommended by a visitor; still testing…
  • Malwarebytes – This is handy if by chance the other stuff listed here can’t help you.
  • Panda Cloud Antivirus – Antivirus in cloud format, so say goodbye to those irritating virus-definition updates.
  • Revo Uninstaller – Uninstalling programs with the Windows uninstaller can be like having a traveling salesman or woman leave muddy footprints on your carpet. Meaning… all sorts of junk remains in your system. Revo seems to do a very good job of overcoming that. It scans deep to get the junk that normally is left behind.

FTP

  • FileZilla FTP freeware. This is another “too good to be true” program with frequent updates. It just seems to be getting better and better.

Making Web Pages

  • Free Gifs and Animations Lots of good stuff.
  • KomPozer Apparently some techies didn’t like the fact that the buggy but very promising Nvu went into stasis. So they continued where Nvu left off. Great job! From my preliminary test it seems this might be the best totally free WYSIWYG editor around.
  • Amaya A free WYSIWYG html editor. It’s a good, straightforward product that would probably fit the needs of basic to intermediate users. Also has some cool special characters.
  • Evrsoft First Page is a free WYSIWYG editor (with a 5 sec. nag screen). It has advanced features but, as others have said, the last version I tested was a touch slow and, on my computer, a bit buggy. Still, I’ve used it with great results. (And it might have been updated since I wrote this particular entry in May 2008).

Making Music / Audio Production

  • Kristal Audio Engine This is a great program for sound recording in a multi-track format. It’s like a software version of the old Fostex and Tascam cassette recorders. Handles up to 16 audio tracks with effects, copy/cut and paste, bouncing and room for expansion. Although Kristal has been criticized for tracks not being in sync, spending a bit of time at the friendly user forum solved the issue for me.
  • Audacity This is THE program for freeware sound recording. Check it out.
  • Reaper Reaper isn’t free but is a 60 day demo. After that, a nagscreen reminds you that it’s not free. But it continues uncrippled because the developers believe that crippling their demo is not the best way to go. This is a great program for music producers if you are willing to look elsewhere for VST plugins (like KVR, Vst4Free or the very helpful Bedroom Producer’s Blog).
  • FL Studio Somewhat like Reaper, FL Studio isn’t free but some features continue to work in the demo version. The cool guitar plugin Slayer, for instance, seems to work without limitation in the free demo version. Other plugins cut in and out. Last I heard, Avicii uses FLS. So it’s gotta have something going for it!
  • LMMS This seems really promising. It used to only work on Linux (which is beyond me). But it’s now Windows-friendly. LMMS is mostly about midi, but you can import recorded audio files as samples. So vocalists might want to try Audacity first, or something like that. This program is fairly basic but has its own charm. I did a really quick, silly thing (posted here) while learning it. I never got much further than that!
  • Asio4All So you’re new to audio production and your tracks are out of sync, or there’s way too much delay between hitting your MIDI keyboard and hearing a sound (called “latency”). Enter Asio4All. The genuine ASIO driver is made, I believe, by Steinberg and is copyrighted material. But many people seem to use Asio4All, which I guess is some kind of approximation of the real thing. Perhaps it’s like generic drugs vs. name brands. It comes bundled with the FL Studio demo and is at CNET, so it’s got to be okay.
  • Synthmaster Player I mention this by itself because it really stands out. It’s free, uncrippled, and great. You may not like my freaky music or limited ability. But I used this synth for the bubbly “Berlin Bass” in the tune On a Star.
  • VST Resources There are a lot of really good sites out there telling you about great free VST plugins. If you really want to find them all, try Google. But the three sites I use most are KVR, Vst4Free, and Bedroom Producer’s Blog. BPB narrows many plugins down to his favorites, and I usually agree with his point of view. He’s also open to new suggestions. So it’s a “must visit” site.
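By the way, the “latency” mentioned above is really just the audio buffer length expressed as time: a driver that processes N samples per callback at sample rate R adds roughly N / R seconds of delay. A quick sketch (the buffer sizes and sample rate below are typical example values, not anything specific to Asio4All):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time the audio driver spends filling one buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# Typical buffer sizes at a 44100 Hz sample rate:
for buf in (2048, 512, 128):
    print(f"{buf:5d} samples -> {buffer_latency_ms(buf, 44100):.1f} ms")
```

Dropping the buffer from 2048 to 128 samples cuts the per-buffer delay from about 46 ms (very noticeable when playing a MIDI keyboard) to under 3 ms, which is why low-latency drivers matter.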

Music Listening / Audio Conversion

  • Songza There are lots of services out there. This one is my favorite. I like it so much, I made several playlists for all to enjoy! (shameless plug) ;-)
  • iTunes You don’t have to purchase media with this software. iTunes comes with fantastic, free streaming radio and a 10-band equalizer and preamp with great presets. Travel the world through talk and music!
  • Winamp Music and video player with a 10-band equalizer and preamp to make music come alive. I don’t know what’s going on with Winamp these days. But I used to like it.
  • Live No-download streaming radio portal. Impressive selection of genres.
  • RadioTime Provides links to many streaming radio stations.
  • AudioGrabber Handles WAV and MP3 formats. Audiophiles will probably know that WAV files sound better but are huge. MP3s are “sonically acceptable” and take up less space for iPods, etc. There are several free grabbers out there, but I find this one sounds bigger and fatter than the others I’ve tested. Some audiophiles may like that, others may not.
  • Xrecode This is great for converting to FLAC (a “lossless” format that sounds just as good as WAV at about 45% smaller file size) and many other formats, including MP3.
  • Freemake Nice user interface, but my FLAC-to-MP3 conversion test took 5 to 6 times longer than Xrecode (which was listed here well before it caught on at CNET, etc.).

Create RSS Feeds

  • FeedSpring Web publishers can use this to generate their own RSS feeds.
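For the curious, an RSS feed is just a small XML document. The sketch below builds a minimal RSS 2.0 feed with Python’s standard library; the channel and item values are made-up placeholders, and a real generator like FeedSpring would add more fields (publication dates, GUIDs, and so on):

```python
import xml.etree.ElementTree as ET

def build_rss(title, link, description, items):
    """Build a minimal RSS 2.0 feed and return it as an XML string."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    for tag, text in (("title", title), ("link", link), ("description", description)):
        ET.SubElement(channel, tag).text = text
    for item in items:
        node = ET.SubElement(channel, "item")
        for tag in ("title", "link", "description"):
            ET.SubElement(node, tag).text = item[tag]
    return ET.tostring(rss, encoding="unicode")

# Hypothetical example feed:
feed = build_rss(
    "My Site", "https://example.com", "Latest posts",
    [{"title": "Hello", "link": "https://example.com/hello",
      "description": "First post"}],
)
print(feed)
```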

Get News with RSS

  • RSS Reader Get news stories from all over the web. This is a super program. But a while back I tested a beta version requiring .NET Framework 2.0 and wasn’t impressed: about a third of my RSS feeds didn’t work. So I reverted to the version built on .NET Framework 1.1, and everything works great.
  • Feedreader Google Reader is no more. I never liked it much anyhow. Doing RSS online is too slow for me. But here’s a program that I use sometimes. It has a good “lookup” feature for specialized articles.
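Under the hood, a reader like the ones above just fetches each feed’s XML and pulls out the items. A minimal parsing sketch with Python’s standard library (the feed text here is a made-up example; a real reader would download it with urllib and also handle errors, Atom feeds, and so on):

```python
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example News</title>
  <item><title>Story one</title><link>https://example.com/1</link></item>
  <item><title>Story two</title><link>https://example.com/2</link></item>
</channel></rss>"""

def headlines(feed_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(headlines(SAMPLE_FEED))
```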

Bandwidth Monitoring

  • FreeMeter Since I’m a regular web cam user, I wanted to know which web cam software is most efficient. Enter FreeMeter.

Scanning, File Conversion, PDF

  • Bullzip This easily converts Windows documents to pdf. Lots of options. Fantastic.
  • Scan2PDF Scan anything and convert it to .pdf (for Acrobat Reader). You can also open image files from your hard drive and convert them to .pdf. I found it works best if, under “options,” you enable the scanner interface to be visible. That way you can adjust the resolution and get really good results.
  • Open Office I tested out the word processor in this suite in 2008 and found it satisfactory, although the English thesaurus was weak, and downloading/installing more dictionaries was a hassle. It was also a bit slow to load and felt heavier on my machine than commercial products. Open Office easily converts to .pdf, however, and supports a wide range of languages. And I believe there’s a more recent version.
  • Primo PDF Primo converts Windows documents to pdf.

Window Management

  • Always on Top I use this with WinXP to keep an application window visible while working with other applications. Examples could be keeping MS Word or maybe a Google chat contact visible while surfing or blogging. This program is very light and works great.

iOS Related

  • Syncios Transfers media files (including video) directly from PC to iPad and other iOS devices, without having to jailbreak. I couldn’t get it to work on Windows 7 64-bit at first. But after following the help steps and my own good sense, it works fine. The joy of this is that you can transfer media to your iOS device without wasting bandwidth (and time) through iTunes (which was a hassle to install on Windows 7 64-bit) or Dropbox.

The software and online content mentioned in this post may be incompatible with your hardware and/or software. By clicking on any of the links mentioned in this post, you agree that this site is not liable for any damages that may be incurred from visiting these links or downloading the software.



Holiday update

Some children looking at a selection of Christmas Cards during the 1910 holiday season. (Photo credit: Wikipedia)

Happy Holidays everyone! I’d just like to highlight the most recent change in Earthpages’ Terms of Use. In a nutshell, I’ve enabled reblogging.

After testing out reblogging, it seems I’ve been way too cautious in not enabling it. The benefits are pretty clear to see: more traffic, more user interaction. Exactly what Earthpages is about!

A good example of what a reblog looks like can be seen here:

Generally, about two or three paragraphs are reproduced, with a link back to the entire article.

I truly hope that everyone who’s been published at Earthpages will be okay with this. If not, you can always get your articles removed. I’ll understand. But I still feel that the benefits of reblogging will far outweigh any potential issues.


Michael Clark, Christmas Day 2014

