
Getting More Out of the Rails Console

by Samuel Mullen

Posted on Jul 11, 2012


Even at its most basic, the console is indispensable in the Rails developer’s arsenal of tools. Whether it’s running a query, testing a chain of methods, or executing small blocks of code, the console may be the most useful tool available to the Rails developer. With that in mind, doesn’t it make sense to do what we can to get the most out of it?

IRB

.irbrc

Under the hood, the Rails console is just IRB (Interactive Ruby), so anything you can do with IRB you can do in the console. This means you can customize your IRB environment through your .irbrc file and define methods to use at the console. Here are three methods I frequently use:

# return a sorted list of methods minus those which are inherited from Object
class Object
  def interesting_methods
    (self.methods - Object.instance_methods).sort
  end
end

# return an Array of numbers from 1 to x
class Array
  def self.test_list(x=10)
    Array(1..x)
  end
end

# return a Hash mapping symbols to random numbers
class Hash
  def self.test_list
    Array(:a..:z).each_with_object({}) {|x,h| h[x] = rand(100) }
  end
end
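With these loaded, quick experiments at the console get a little less tedious. A few illustrative calls (the hash values are random and the method list is truncated):

 > Array.test_list(3)
 => [1, 2, 3]
 > Hash.test_list
 => {:a=>41, :b=>87, :c=>3, ...}
 > "foo".interesting_methods
 => [:%, :*, :+, ...]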

Of course this isn’t even scratching the surface of what’s possible; it’s worth poking around to see what other people are doing with their .irbrc files.

The Last Expression

While working in the console, have you ever typed out a bit of code to return a value, only to realize you forgot to assign the returned value to a variable? You then have to go back through the console history, move your cursor to the beginning of the line, add the variable, and execute the code again.

Ugh, what a pain!

Unbeknownst to most people, IRB places the output of the last command into the _ variable. Here, let me show you:

 > Array(1..10)
 => [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
 > _
 => [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

Now, as cool as that is, you have to understand that _ always contains the output of the last expression. This means that if you call a method on it, it will then contain the output of that method.

 > Array(1..10)
 => [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
 > _
 => [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
 >  _.map {|i| i * 2}
 => [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
 > _
 => [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
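This also makes the forgotten-assignment problem above painless: just capture _ before you run anything else. The variable name, of course, is arbitrary:

 > Array(1..10).reduce(:+)
 => 55
 > total = _
 => 55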

Forking IRB

When working in the console, it’s sometimes desirable to have another instance to play with. It may be that you don’t want to lose what you were working with, or you just need another scratch area; whatever the case, you can create a new console (IRB) session by calling irb at the prompt (note: even inside the Rails console, the command is irb).
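IRB’s job-control commands make these subsessions a bit more useful than they first sound; you can even hand irb an object to use as self. A rough sketch (prompts and session numbers will vary with your IRB configuration):

 > irb [1, 2, 3]   # open a subsession with the array as self
 > size
 => 3
 > jobs            # list the running sessions
 > exit            # close the subsession and return to the original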

I typically don’t use this. If I need another IRB instance, I just open a new tmux pane or window and work there.

If this sort of workflow fits you, I highly recommend reading Gabriel Horner’s in-depth post on IRB commands.

The Rails Console

Models

One of the things you’ll want to make extensive use of in the console is your app’s models. The Rails console is a great way to play with your models and an alternative way of accessing your data.

 > u = User.find 1234; nil
 => nil

 > u.name
 => "Foo Man"

 > u.email = "fooman@example.com"
 => "fooman@example.com"

 > u.save
 => true

The “app” object

The app object is an instance of the same session class Rails’ integration tests use to mimic user interactions. Through it, we can access routing information and even make requests to our app.

# displaying path information
 > app.users_path
 => "/users"

 > app.user_path(User.last)
 => "/users/42"

# making app requests
 > app.get app.user_path(User.last)
 => 200

 > app.response.body
 => "<!DOCTYPE html>\n<html>\n  <head>\n    <title>..."

The “helper” object

I really don’t think I can do a better job on this than what Nick Quaranto already did in his “Three Quick Rails console tips” post.
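That said, the short version is that helper exposes your view helpers at the console. A couple of illustrative calls (assuming the default locale):

 > helper.number_to_currency(100)
 => "$100.00"
 > helper.pluralize(3, "user")
 => "3 users"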

Reloading the Environment

If you make changes to your app while still in the console, you will need to reload your console session with the reload! command. You will also need to re-instantiate any objects that existed prior to the reload before they will pick up your changes.

 > u = User.find 1234; nil

# changes made to the User model outside the console

 > reload!
Reloading...
 => true
 > u = User.find 1234; nil

Better Output

At one time it seemed like everyone was writing a gem to improve the IRB experience, but that particular endeavor appears to have been largely abandoned. The one project that still appears to be active is the awesome_print gem.

I’ve used this gem in the past, and it really does improve the output and IRB experience. It also supports pry.
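If you want to give it a spin, add the gem to your Gemfile (or install it globally) and require it in the console. The output below is approximate, loses the colorizing, and omits the return value:

 > require 'awesome_print'
 => true
 > ap({:a => 1, :b => 2})
 {
     :a => 1,
     :b => 2
 }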

In a pinch, you can format the output as YAML with the y command.

 > y Array.test_list(5)
---
- 1
- 2
- 3
- 4
- 5
 => nil

Avoiding Output

Running commands in the console can get pretty noisy; output follows every command that’s run. To get around this, just end your command with a semicolon and nil.

 > u = User.last; nil
=> nil

In the above example, u still contains the “last” user record; it just doesn’t print out all the output that would normally be produced.

The Sandbox

Sometimes it would be nice to open up a console session and mess around with the data just to see what happens. But if you do that, the data’s messed up. The solution is to launch the console with the --sandbox flag. When launched this way, you can handle the data, tweak it, and destroy it, all without fear of doing any real harm.

rails console --sandbox

Loading development environment in sandbox (Rails 3.2.1)
Any modifications you make will be rolled back on exit
 >

Conclusion

In my workflow, the Rails console is an indispensable tool. It allows me to test out ideas, access the database, run minor tasks, and even do quick calculations. I can’t imagine developing Rails applications without it, because I know how painful it is to be deprived of such a tool in other languages and frameworks.

What are your favorite IRB and console tricks?


Up and Running With Ruby Interactive (ri)

by Samuel Mullen

Posted on Jan 30, 2012


Unbeknownst to many Ruby developers, Ruby ships with a wonderful command-line tool for referencing the API of the language and all available libraries: ri. ri stands for “Ruby Interactive”, which is an odd name for a tool to navigate documentation, and one which inevitably gets confused with irb (Interactive Ruby). Seriously, try Googling “Ruby Interactive”.

Perhaps it is due to the popularity of editors such as TextMate or Sublime, or perhaps because of the ease of finding documentation through Google, but it seems that most developers either don’t know about ri or don’t use it. I do, but I’m a command-line junkie and I’ve not found another tool which is as simple or as accessible as plain old ri.

Getting Started

If you have Ruby installed, you should have all that’s necessary to play around with ri. If you have trouble, look at “Troubleshooting” below.

To try it out, at the command-line type ri followed by the class or method you want documentation about. Typing ri Array should show you the docs for the Array class. If you type ri Array.min, ri will provide the documentation for Array’s min instance method. To be very specific about things, use :: for class methods or # for instance methods rather than the generic dot.

Example:

 ri Array::wrap   # class method lookup
 ri Array#min     # instance method lookup

One really useful feature of the tool is the suggestions ri provides when you enter only part of what you are looking for. Try it out:

$ ri Enumerable.each
Enumerable.each not found, maybe you meant:

Enumerable#each_cons
Enumerable#each_entry
Enumerable#each_slice
Enumerable#each_with_index
Enumerable#each_with_object

Neat. I didn’t know about #each_with_object.

In Technicolor

You may have noticed that all the documentation is in black and white - or at least you would have if you were using my color scheme. If you want to add a little color to your output, use the “ansi” format.

ri -f ansi Array.sort

If you like this, you can set up an alias in your .bashrc or .zshrc.

alias ri='ri -f ansi'

Make sure not to use a direct path to ri since RVM projects will have ri located somewhere else. You want to use your project’s instance of ri.

It’s All About the Interactive

It might surprise you to learn that “Ruby Interactive” actually has an “interactive” mode. You can start ri in interactive mode by passing the -i flag.

Here’s what that looks like:

 $ ri -i

 Enter the method name you want to look up. You can use tab to autocomplete. Enter a blank line to exit.

At the prompt, just type in whatever methods, classes, or what-have-you you want to look up and hit return. To exit, just hit return again, or use the ever-available ctrl-d.

Unlike calling ri with a specific parameter, interactive mode will, like an inquisitive child, keep prompting you to search again.

VIM and ri

It’s not always convenient to drop down to the command-line just to look up documentation. Wouldn’t it be cool if you could search the docs from your favorite editor? Daniel Choi thought so and created the excellent ri_vim plugin. It’s easy to install, but you will need to install it in each RVM project you want to use it in.

ri isn’t for everybody and that’s cool, but for those of us who are more at home at the command-line than in our own house, it’s indispensable. Try it out for yourself; you may very well find it’s more accessible and useful than you realized.

Troubleshooting

If you are not in an RVM project and you are unable to find ri documentation, try this:

gem install rdoc rdoc-data
rdoc-data --install

If you are in an RVM project, this should clear things up:

 rvm docs generate-ri



Determining the Number of Days In a Month

by Samuel Mullen

Posted on Apr 19, 2010


require 'date'

def days_in_month(year, month)
  Date.new(year, 12, 31).prev_month(12 - month).day
end

You’re basically grabbing the last day of the year and then getting the day of the month *n* months previous (12 - month).
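A couple of quick checks at the console (February 2012 being a leap year):

 > days_in_month(2012, 2)
 => 29
 > days_in_month(2012, 4)
 => 30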

Update: For those who ask, “Yeah, but how do you find the first day of the month?” It’s “1”.


IRB: Global + Local .irbrc

by Samuel Mullen

Posted on Apr 1, 2010


I really like the way the Git SCM handles global and local configurations. Globally, you have .gitconfig and .gitignore files which define how Git behaves across all projects. Locally, you have .gitignore and .git/config files which are applied specifically to the current directory or project. This approach works very well and this morning I tried applying it to the way Ruby’s IRB works.

With IRB, you really only have a global configuration file: the .irbrc in your $HOME directory. In one of my current projects I have found that I type the same blob of code every time I enter the IRB shell in that project: I always need to load a couple of libraries, connect to a database, dance a jig, etc. I really don’t like doing this, so I decided to take a page from Git and implement my own directory-local .irbrc. This is what I came up with:

# requires and stuff go here

def load_irbrc(path)
  return if (path == ENV["HOME"]) || (path == '/')

  load_irbrc(File.dirname path)

  irbrc = File.join(path, ".irbrc")

  load irbrc if File.exists?(irbrc)
end

# other ruby code in your .irbrc

load_irbrc Dir.pwd # probably should stay at the bottom

The load_irbrc method is recursive and will load every .irbrc file in the path, starting at the top, but it stops short of loading $HOME/.irbrc (that’s the file doing the loading in the first place). I did this to allow each .irbrc to build upon the ones above it - well, that and I thought it’d be fun.
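To make the load order concrete, here’s what happens for a hypothetical project directory (the paths are made up for illustration):

# with HOME=/home/sam and an IRB session started in /home/sam/projects/foo,
# load_irbrc(Dir.pwd) loads, in order:
#
#   /home/sam/projects/.irbrc      (if it exists)
#   /home/sam/projects/foo/.irbrc  (if it exists)
#
# /home/sam/.irbrc is skipped because the recursion stops at $HOME.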

Now, in my $HOME/.irbrc file I have global configurations and in my projects I have project specific configurations. Yea, me.


Book Review: Metaprogramming Ruby - Program Like the Pros

by Samuel Mullen

Posted on Mar 23, 2010


Overview

Paolo Perrotta, as the back cover of the book describes him, has programmed “for more than a decade and published hundreds of technical articles along the way.” Before “falling in love with Ruby,” he wrote Java, C++, and C#. Once you begin reading this book, you’ll understand why he is successful as a Ruby trainer in Europe and why he is an Agile coach at Yoox.

The Foreword of the book was written by “matz” (an accomplished Ruby programmer, as I understand it), but it really doesn’t lend any weight to the value of the book because it’s just a summary of what metaprogramming is and not an endorsement of the author or even the book itself.

Metaprogramming Ruby is divided into two parts (three if you include the Appendices): Metaprogramming Ruby and Metaprogramming in Rails. Although the section names are similarly titled they are quite different in what they seek to accomplish.

In Part I, the author places the reader in the role of a new employee paired up with a more senior programmer, Bill. Bill has “some months of Ruby under his belt,” and acts as a guide and mentor through the weird and wild world of metaprogramming.

In Part II, the book slips back into the more customary technical manual role. Building upon what was learned in the first section, the author delves into Rails itself and brings to light the hidden mysteries of ActiveRecord and ActionController.

Table of Contents

Introduction

Part I: Metaprogramming Ruby

  1. Monday: The Object Model
  2. Tuesday: Methods
  3. Wednesday: Blocks
  4. Thursday: Class Definitions
  5. Friday: Code That Writes Code
  6. Epilogue

Part II: Metaprogramming in Rails

  1. The Design of ActiveRecord
  2. Inside ActiveRecord
  3. Metaprogramming Safely

Part III: Appendixes

A. Common Idioms
B. Domain-Specific Languages
C. Spell Book

What’s Good

At first I was a little put off by the storyline of Part I. Having read countless computer manuals and never having run across an instance where the reader was brought into the story, I was initially apprehensive. That apprehension quickly dissolved as I found myself more engaged with the book because of the storyline. I really don’t think there are many writers (of computer manuals) who could pull off this sort of style and get away with it. Paolo Perrotta does so with ease. Not surprisingly, it makes reading this part of the book much more enjoyable and memorable.

What’s good about this book, though, isn’t so much the style the author uses as what the book actually accomplishes: it makes the advanced aspects of Ruby seem not so intimidating. For many of us, metaprogramming really does seem like magic. We look at the code, scratch our heads, and just assume we’re too dense to get it. Metaprogramming Ruby shows us that not only is it not magic, but that any programmer can do it and even understand what others have done.

What’s Not So Good

I hate to say it, but what’s good about this book is also what is bad about it. When you have a technical book which is so approachable and easily read, it’s generally not going to perform well as a reference book. Personally I really don’t see myself using this book much. I plan on reading Part I again - maybe once again after that - but I don’t imagine I’ll crack it open after that. It will, however, make books like The Ruby Programming Language more accessible.

Conclusion

If you scratch your head in wonder at the magic you see going on in Rails and other Ruby projects, or if you think you’re just not accomplished enough as a programmer to access the more advanced areas of Ruby, check out this book.

For the Ruby programmer, it really is worth the time and effort to understand the more advanced areas of the language and this book will be a great guide to help you unlock the magic of Ruby.


BasicDaemon is a Basic Ruby Daemon Library

by Samuel Mullen

Posted on Feb 22, 2010


There have been a number of occasions when I have needed to have a background process running: image manipulation, content transfer, queuing emails, etc. Initially, I wrote the piece which forked off the parent process by hand. Not having read Richard Stevens’ sections on daemons (Advanced Programming in the Unix Environment) closely enough, I ended up with processes which didn’t get forked enough. Hrmm, this paragraph is going downhill quickly.

Realizing I was wasting a lot of time and making my code look like utter garbage, I began looking for a Ruby library which could replace and correct my forking code. The libraries I found (ctrl-f for daemon on ruby-toolbox) either did way too much, did things antithetical to what I wanted, or were questionable.

Long story short: I wrote my own. I named it BasicDaemon because it’s just a basic daemon library; well, that and simple_daemon was already taken. It runs in two different ways: 1) you can subclass it and override the run method; 2) you can pass a block to it.

Here’s an example of subclassing BasicDaemon:

#!/usr/bin/env ruby

require 'rubygems'
require 'basic_daemon'

class MyDaemon < BasicDaemon
  def run
    foo = open("/tmp/out", "w")

    i = 1

    while true do
      foo.puts "loop: #{i}"
      foo.flush
      sleep 2

      i += 1
    end
  end
end

d = MyDaemon.new

if ARGV[0] == 'start'
  d.start
elsif ARGV[0] == 'stop'
  d.stop
  exit!
elsif ARGV[0] == 'restart'
  d.restart
else
  STDERR.puts "wrong! use start, stop, or restart."
  exit!
end

exit

Here’s the example passing a block:

#!/usr/bin/env ruby

require 'rubygems'
require 'basic_daemon'

basedir = "/tmp" 

d = BasicDaemon.new

# for restarts to work properly, you can't use anonymous blocks. In other
# words, blocks have to be assigned to a variable
process = Proc.new do
  i = 1
  foo = open(basedir + "/out", "w")

  while true do
    foo.puts "loop: #{i}"
    foo.flush
    sleep 2

    i += 1
  end
end

if ARGV[0] == 'start'
  d.start &process
elsif ARGV[0] == 'stop'
  d.stop
  exit!
elsif ARGV[0] == 'restart'
  d.restart &process
else
  STDERR.puts "wrong! Use start, stop, or restart."
  exit!
end

exit

As you can see, both examples do the same thing: they put a loop into the background which logs the number of loops to a file (out) in /tmp. Wheeee!

That’s about it. There are a couple of parameters you can set to define where the pidfile will be located and what it will be named, but not much more. You can check out the code on GitHub. If you don’t care about checking out the code, you can just install it from gemcutter with the following command:

sudo gem install basic_daemon

And of course, wouldn’t you know it, as soon as I tagged it v1.0.0 and pushed everything out, I realized something I could add to make it better. I guess that’s what v2.0.0 is for.


Test Driven Development and the Lowest Common Denominator

by Samuel Mullen

Posted on Dec 31, 2009


I am, admittedly, new to testing my code. Let me clarify that: I am, admittedly, new to using a formal method of testing my code. Every programmer tests their code in some manner or other, but it seems only recently that test driven development (TDD) has become popular. Personally, I blame Kent Beck and the whole “Extreme Programming” movement brought about by his book, Extreme Programming Explained. I’ve begun using a testing suite not because I’ve drunk the TDD kool-aid, but rather because I have begun to see it as a formalized methodology of what I have always done.

Let me explain what I mean: Because no one is perfect, we are all forced to test our work in some manner. For the longest time, I would perform a number of actions to test whatever it is I’m working on. I often cut and paste a snippet of code into a script or an interactive shell; I might use a foo.rb script to source libraries and call methods or print variables which I would later overwrite, comment out, or whatever as need be; oftentimes I’ll just try running the application and seeing if it works; you can check logs and core dumps; and of course, there is the ubiquitous ‘print’, ‘printf’, ‘puts’, ‘alert’, etc.

The main problem I’ve found with the various means I’ve used to test in the past is their ephemeral nature - I always delete the files or code bits when I’m through with them, and I inevitably want them back once they’re gone. And so I’ve come to the point where I want a formalized (and permanent) method of testing. It’s not because I’ve read a book, heard a compelling argument, or even because everyone in the land of Ruby is doing it. I’m taking this path because it’s the most practical choice. TDD is really just the natural progression of what we’ve always done.

When I first came to this conclusion (using TDD), and after reading several articles on the matter, I decided to use RSpec for my testing framework. However, I have since rethought that decision and have decided to stay with Test::Unit. Don’t get me wrong, I think RSpec is really a brilliant framework. I know there are people out there who can’t stand it, but for being one of the first DSLs (Domain Specific Languages) for testing, they really nailed it, and its developers should be applauded rather than criticized for their work (it’s just that the latter is so much easier).

I’ve settled on Test::Unit simply because it’s the lowest common denominator. Regardless of who installs my projects, on what platform, or for what Ruby version, I can be confident that Test::Unit is there because it’s part of the standard library. I can’t be sure that RSpec, Cucumber, shoulda, or any of the others will be, and I don’t feel right asking that they be installed just to run my tests. Furthermore, Test::Unit is compatible with all of the testing DSLs just mentioned (it ought to be; they’re built on top of it).

  • “It’s fully compatible with your existing tests, and requires no retooling to use” – thoughtbot (shoulda)
  • “You can use the familiar Test::Unit assert methods…” – Aslak Hellesøy (cucumber)
  • “Did you know that rspec is interoperable with test/unit?” – David Chelimsky (RSpec)

Am I being a little extreme in this? Probably. Did this really deserve a blog post about it? Probably not. Is this a permanent choice? Until another library is standardized upon. Do I think you should choose Test::Unit too? No, I really don’t care; I want people to use what makes them the most productive. This was really just an exercise in understanding why I’ve made the choice I have.

Update 20120404: Mmm, no. For Rails I now use RSpec and Cucumber. If I’m building gems, I use Minitest. Not sure what I was smoking when I wrote this post.


inotify: When You Absolutely, Positively Have to Know About it Right Now.

by Samuel Mullen

Posted on Dec 20, 2009


At Universal Uclick, I deal with a lot of files. Comics, puzzles, advice columns, etc.: If it can be syndicated online, I handle it. Content is retrieved in a variety of ways (FTP, HTTP, internal filesystems [Samba, NFS, etc.]) and from a variety of sources. Because the content I deal with has deadlines and because our creators occasionally (almost never [wink, wink, nudge, nudge]) make a mistake, I’ve spent a lot of time trying to figure out the best way to retrieve files into our system as quickly as possible.

When I was first hired, files which had not been modified for over 30 minutes were considered safe to pull into the system. This really wasn’t ideal since, when combined with other parts of our system, moving a file from its origin to its final destination could take upwards of two hours. After rewriting a couple of key pieces and tweaking some other areas, I managed to reduce the time to about ten minutes.

I did a lot of things correctly in my initial rewrites - moving from cron-based processes to resident (daemon) processes - but I was still relying (incorrectly) on modification times to determine if a file was safe to bring into the system.

What I needed was a way of determining whether a file had been closed (i.e. finished downloading) and some means of kicking off a process as soon as it was. I knew it was possible; Dropbox was doing it, and it just seemed like a very Unixy sort of thing. Most of the research I did turned up people performing a system call to the lsof command to determine if anything was accessing the file. That’s fine, I suppose, but 1) system calls like this carry a bit of overhead; 2) I’m never confident about the consistency of the response between systems; and 3) it’s a hack. I knew that if there was a command which could be called, there had to be a library available to do the same thing.

After a bit of research I ran across the inotify library. The first paragraph of the manpage reads:

The inotify API provides a mechanism for monitoring file system events. Inotify can be used to monitor individual files, or to monitor directories. When a directory is monitored, inotify will return events for the directory itself, and for files inside the directory.

I could use the inotify system commands to kick off processes, but I’m not too keen on a process being launched for each of 100+ files that might get uploaded simultaneously. The alternative, in my case, was to use a Ruby library for inotify.

The library I chose is Nex3’s rb-inotify. It’s dead simple and sits right on top of the C libraries (errr, pretty much). To use it you just instantiate it, give it a directory to watch and conditions to watch for, and a blob of code to execute when the conditions are met. A simple example might look like this:

#!/usr/bin/env ruby

require 'rb-inotify'

notifier = INotify::Notifier.new

notifier.watch("/path/to/watch", :moved_to, :create) do |event|
  puts "I found #{event.name}!"
end
notifier.run

This example just prints “I found <filename>!” any time a file is moved into or created within the “/path/to/watch” directory.

For the most part, this would work for us as long as we weren’t doing anything too intensive with the files. Unfortunately, we are. Our system has to resize PDF, EPS, and TIFF files to web versions, process crossword and sudoku files, make sure everything is formatted correctly, move the files to our storage system, and so on and so forth. There are two main problems with this scenario as it stands: 1) if the system goes down while there are files which have yet to be processed, someone (myself) will have to fire the triggering event to get the files processing again; 2) this only works for one machine. It’s not scalable.

The two problems above are the reasons I ignored inotify as a possible solution last year when I began redesigning our digital asset management system (DAM). Since then, however, we began using RabbitMQ (an AMQP queueing system) for some of our processing and it occurred to me (thanks to Mike Admire) that I could combine the two technologies to achieve my goals.

Now, rather than processing files as they come into the system, I just send RabbitMQ a blob of JSON which other processes on other servers then retrieve to perform the necessary processing. Since the queueing is performed so quickly, I really don’t need to worry about missing anything if the server dies unexpectedly - I just have to worry about the server dying unexpectedly.
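For flavor, here’s a minimal sketch of that wiring using rb-inotify with the bunny gem; the queue name, watched path, and payload fields are all made up for illustration, and the workers on the other end are left out entirely:

#!/usr/bin/env ruby

require 'json'
require 'bunny'
require 'rb-inotify'

# connect to RabbitMQ and declare a queue for the workers to consume
conn = Bunny.new
conn.start
channel = conn.create_channel
queue   = channel.queue("incoming_files", durable: true)

notifier = INotify::Notifier.new

# whenever a file lands in the watched directory, publish a small JSON
# message instead of processing the file in place
notifier.watch("/path/to/watch", :moved_to, :create) do |event|
  payload = JSON.generate(path: event.absolute_name, received_at: Time.now.to_s)
  channel.default_exchange.publish(payload, routing_key: queue.name)
end

notifier.run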

Now, full disclosure: I really don’t need to know the exact moment a file is done downloading. We’ve never received so many files in a single day so as to necessitate worrying about every second. Still, I feel better knowing that I’m doing things properly, using the libraries at my disposal, instead of some hack.


"No Colors Anymore I Want Them To Turn Black"...and White

by Samuel Mullen

Posted on Sep 29, 2009


I’m currently implementing a Digital Asset Management (DAM) system for Universal Uclick, the company I work for. One of the requirements is to keep both color and black and white versions of our creators’ works. This hasn’t been a problem with our previous system because we’ve used naming conventions to distinguish between the two versions. In this newer version, I’ve decided to use the same name, regardless of color or grayscale, and just store said parameter in the database. The problem I ran into, however, is determining whether an image is color or grayscale.

I discovered that the answer is to create a black and white version (in memory) of every image and compare the two versions using ImageMagick’s “difference” method. But before I get into that, we first have to deal with one little problem…

I’m using Ruby’s implementation of ImageMagick (RMagick). According to the docs, I’m supposed to use the “quantize” method for converting color to black and white. The only problem is, it doesn’t work with images like the Non Sequitur strip I was testing against.

The solution, I found, is to use the “modulate” method, setting brightness to 1.0, saturation to 0.0, and hue to 0.5. Just for giggles I benchmarked the two methods and found modulate to be significantly faster. Here are the results from that:

           user     system      total        real
quantize  4.620000   6.820000  11.440000 ( 11.831151)
modulate  1.270000   0.060000   1.330000 (  1.125789)

Which one would you use? Yeah, I thought so. (Note: I did not run the benchmark against the Non Sequitur strip, but rather against a Garfield strip.)
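A benchmark along these lines would produce output in that shape; this is a sketch rather than my original script, and it assumes quantize is asked for a grayscale colorspace and that the image path is passed on the command line:

require 'benchmark'
require 'rubygems'
require 'RMagick'

# load the image to convert (path passed as the first argument)
img = Magick::Image.read(ARGV.shift).first

Benchmark.bm(9) do |x|
  x.report("quantize") { img.quantize(256, Magick::GRAYColorspace) }
  x.report("modulate") { img.modulate(1, 0, 0.5) }
end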

So now the image is being changed to black and white correctly. The next step is to compare the original to the new B&W version. To do that, use ImageMagick’s “difference” method and look at the resulting “normalized_mean_error”. If it’s 0.0, the image is already black and white; if not, it’s color. Here’s a snippet of code to do this:

require 'rubygems'
require 'RMagick'

class Chromatic
  attr_reader :image
  
  def initialize(img)
    @image = img
  end
  
  def chromatic?
    grayscale_img = self.image.modulate(1,0,0.5)
    self.image.difference(grayscale_img)
    
    self.image.normalized_mean_error > 0
  end
end

img = Magick::Image::read(ARGV.shift).first
img2 = Chromatic.new(img)
puts img2.chromatic?

Two birds; one rolling stone. You can now make B&W images faster and determine if an image is color or black and white. Outstanding.


About me


I live in the greater Kansas City area with my beautiful wife, our two great kids, and our dog. I've been programming using Open Source technologies since '97 and I'm currently an independent software developer specializing in Ruby on Rails and iOS. I am for hire.