I've been puttering around on this project for a little while (started at WhereCamp Boston last fall) called Clashifier - it's intended to help people "train" a "model" of different land types by drawing on an example image. Then it tries to extrapolate that model to colorize any image according to the same "training".
This is completely experimental - I've written up a simple version that doesn't do a very good job yet, but it's a pretty nice starting point, and I thought some of the programming-inclined on the list might be interested, as well as some of the remote sensing people. Code can be found at:
And look at nice pictures here:
Training by "drawing" on an image: https://www.flickr.com/photos/jeffreywarren/6704268799/
Clashifier trying to colorize an image: https://www.flickr.com/photos/jeffreywarren/6704267823
As you can see it's not very good ... yet! But there is potential to, say, view whole map datasets through this, and be able to:
- quantify a certain type, like "what percent wetlands is in this picture?"
- identify things automatically, like "how many oil slicks are there here?" or maybe even "are there any crops with stripe rust infections?"
I've made it a bit modular so that we can add better classification approaches to improve it -- right now it's just based on "dumb" Cartesian (Euclidean) distance in each color band.
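To give a feel for that baseline approach, here is a minimal sketch in Python - not the actual Clashifier code, just an illustration of nearest-color classification by per-band distance. The class names and training colors are made-up examples standing in for colors collected by "drawing" on an image:

```python
import math

# Hypothetical training data: one representative RGB color per land
# type, as if collected by "drawing" on an example image.
TRAINING = {
    "wetland": (60, 90, 70),
    "water":   (30, 50, 120),
    "field":   (140, 160, 80),
}

def classify(pixel, training=TRAINING):
    """Assign a pixel to the class whose training color is nearest
    by Euclidean distance across the color bands - the "dumb" baseline."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda cls: dist(pixel, training[cls]))

def percent_of(cls, pixels):
    """Answer questions like "what percent wetlands is in this picture?"
    by classifying every pixel and counting the matches."""
    labels = [classify(p) for p in pixels]
    return 100.0 * labels.count(cls) / len(labels)
```

Swapping in a better classifier would mean replacing `classify` while keeping the counting step the same, which is roughly the kind of modularity described above.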
This generated quite a thread of discussion on the mailing list: https://groups.google.com/forum/#!topic/publiclaboratory/AgB-dxEQqBQ
Another example of training: