Saturday, March 30, 2013

Markdown and Phantomjs

Looking at the evolution of markdown use in sites like github, I feel that markdown is becoming a way to platformize your service.
travis-ci.org is an example of that: by embedding an icon in the repository's home page that dynamically displays the build status of the repository, you end up with a truly dynamic repository home page.

Playing on this theme, I was talking with folks at work yesterday.. I saw a friend's awesome picture - he had just had a baby and made his desktop background a picture of his baby and his dog. It was truly an awesome picture. My first immediate reaction was to take a photo of it. Why? Because I felt that what I and his physically collocated colleagues could see, enjoy and share feelings with him about was not possible for all our remote, not-physically collocated colleagues. And given that this is a thorn in my eye every time I see it, I took a picture to send it around.
My friend laughed at me, explaining that all his friends had already seen the photo - only me, the non-facebooking guy, hadn't. I wasn't satisfied. What I cared about was not photo sharing with friends.
The desktop background/screensaver plays a different role.
It is always there, today and tomorrow, and it gives the people who work most closely with a person a continuous (and often changing) vibe about their colleague's personality, phase in life, mood etc.
The desktop background is one more thing that makes face to face work relationships work better/differently than remote ones.
Ok, then I can fix it. Create a mydesktopbackground.com site with a desktop client that pushes the current desktop background, whenever it changes, to a site that makes that same picture available as my web background/homepage photo - in my github pages, in my blog, as my homepage background image.... all of that is possible, easy, and makes sense. And then you can see not only that picture but also the history of the backgrounds... to see how you have changed. And maybe do that for all your desktops. And maybe for your android/iphone background as well. etc..
One thing at the back of my mind that makes some of the connections with the first paragraph is this: the wealth of tools like phantomjs, which let one easily/freely convert via an API a complex html layout of an image/dashboard/report into a rendered .png or .jpg, allows any page to be "sourced" inside a markdown page. Let me repeat: before, you had to rely on more open systems that allow one to include full html as part of an html widget (think Disqus etc..). But today, even in the most restricted systems - as long as they allow the embedding of a picture - you still have the option of including the static rendering of that page. It's not click sensitive (note to self - I wonder what has happened to the original html image-map and whether I could generate using phantomjs not just a picture rendering of a page but also an imagemap that would capture the different click targets of every subregion of the page... that would be cool) but it can make a difference in how connected/integrated different services feel.
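The phantomjs part is mechanical enough to sketch. The two-line webpage script below is standard phantomjs usage; the Python helpers around it (`rasterize_command`, `markdown_embed`) are my own names, just to show how the rendered file ends up as a plain markdown image line.

```python
# Sketch: render any URL to a .png with phantomjs, then embed it in markdown.
# The helper names are mine; only the webpage script is standard phantomjs usage.

RASTERIZE_JS = """\
var page = require('webpage').create();
var args = require('system').args;        // args[0] is the script name itself
page.open(args[1], function () {
    page.render(args[2]);                 // write the loaded page as an image
    phantom.exit();
});
"""

def rasterize_command(url, out_png, script="rasterize.js"):
    """Command line that renders `url` into `out_png` (script holds RASTERIZE_JS)."""
    return ["phantomjs", script, url, out_png]

def markdown_embed(alt, image_url):
    """The markdown line that 'sources' the rendered page as a picture."""
    return "![%s](%s)" % (alt, image_url)
```

So even a system that only allows images can show a snapshot of any page, as long as something re-runs the render periodically.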

Thursday, March 28, 2013

And more HS experiment successes

So far this has been 4 out of 4 experiments, all successful, all with 12-24 hr TAT

1. blogger -> github blog conversion (data entry)
2. csv generation for blog (python)
3. convert eclipse project to maven
4. travis-ci integration (the most recent one - https://github.com/ogt/contractor-recommendations/pull/4)

They have involved 3 folks - (3 and 4) had the same people.

By now I get almost all candidates with all questions answered.


Still Issues:
 - I may need to ask people to do a final merge before submitting their pull request - to enable automatic merge
 - people are still bidding 11% on top of the budget
 - Inviting people that have successfully done jobs with related skills in the past creates practically guaranteed success. They know the drill.
 - It makes sense to correct people publicly so that others can understand what they should do better next time.
 - Compiling a directory of prior jobs and the people that have done them seems useful.


Wednesday, March 27, 2013

More HS Experiment success


More HS Experiment success again.

This time it worked with approximately 15 hrs total TAT, and it involved, in my view, a more niche set of skills (heroku + maven)

Still, people are confused about
- Fork/Pull Request. The contractor in this case cloned my repo and told me that it's ready for me to pull from (instead of doing a fork/pull request)
- Contractor writing his own HS. The contractor in this case didn't understand that I was asking him to write up what he did - he assumed that I was asking for instructions on what I should do to receive/test his work.
- When to have the work done by: there was no indication on my part about the timing. It may be better if I say: I expect this to be done in a few hours, with a max 24 hr TAT from job start. I will hire the first person that follows the applicant instructions correctly - if multiple, I will hire the person with the earliest TAT.

Other issues
- Another debate is the confusion between the OD job post and the issue.
Currently they are almost identical, but in reality they can't be - there is a different assumed context between the two.
- I added a prior-jobs section as part of the envelope to provide proof of easy $s/feedback and to collect jobs across clients
- A candidate answered with the screen snap I was asking for in the application - I ended up hiring him instead of another candidate that had also answered the questions and whom I was about to hire.

It seems that I need to
- Ask for github id as part of the standard envelope
- Ask for TAT in terms of hours

Also I may want to point out that this will stay as a public contribution - that the user will be able to point to in the future.

Tuesday, March 26, 2013

Life and death in twitter

I just found out in a skype chatroom - they were talking about a python/django contributor who apparently died recently .... http://www.holovaty.com/writing/malcolm/ .

I followed the link to his twitter account expecting to find out more, and I found just his stream of normal everyday tweets suddenly going silent a week ago. Somehow macabre, somehow realistic: seeing this person talking every day, completely unaware that his tape was about to run out, and suddenly there is no tape any more.

Monday, March 25, 2013

More HS experiments

Continued last weekend's experiment.

I found a simple utility called pipdiff that checks all your installed python packages and displays whether they are available with pip and, if they are, whether they are at a different version from the archive. I wanted to run it once against my system... but unfortunately the author had just posted it as a gist... the util that tests your pip installs is not itself pip installable.
At that point I thought, "if HS were to exist" I would say, "I wish for this utility (url) to be installable with pip", hit my magic wand, and I guess within 24 hours I would get a notification "Your wish was granted". I would run

> pip install pipdiff
> pipdiff ....
...
and everything would work as I would expect.
So, I promise, I did yesterday morning something along the lines of that wish (~30-60 mins), and today (no other communication), I started writing this blog, stopped, went back to the shell and did the following (I was that confident!!)


ithaca:pipdiff odysseas$ pip install pipdiff
Downloading/unpacking pipdiff
  Downloading pipdiff-0.1.tar.gz
  Running setup.py egg_info for package pipdiff
    
Installing collected packages: pipdiff
  Running setup.py install for pipdiff
    
    Installing pipdiff script to /usr/local/share/python
Successfully installed pipdiff
Cleaning up...
ithaca:pipdiff odysseas$ pipdiff
Django==1.4.3                            PyPI:Django==1.5
Fabric==1.5.1                            PyPI:Fabric==1.6.0
Jinja2==2.6                              
....

I was that confident!!
I then went and searched google (24 hrs after I made my wish)


and there I found the PyPI utility already indexed as the 2nd result that google serves for the pipdiff keyword. (I actually found the corresponding github repo of mine that was used for the job as the 3rd result)

Here are the relevant links:

- Original repository issue : https://github.com/ogt/pipdiff/issues/1
  + cloned issue above as oDesk job post : https://www.odesk.com/jobs/Register-python-utility-PyPI_~~35eac626ce139e45
- Python module : https://pypi.python.org/pypi/pipdiff/
- Pull request : https://github.com/ogt/pipdiff/pull/2
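For reference, the comparison at the heart of a pipdiff-style tool is small enough to sketch. The two dicts below stand in for the real queries (pip's installed list and PyPI's index); the function name and output format are mine, mimicking the output above.

```python
# Sketch of a pipdiff-style check: compare locally installed versions against
# what PyPI reports and print the mismatches. The two dicts stand in for the
# real pip / PyPI lookups.

def version_diff(installed, pypi):
    """Return lines like 'Django==1.4.3  PyPI:Django==1.5' for mismatches."""
    lines = []
    for name, local in sorted(installed.items()):
        remote = pypi.get(name)
        if remote is None:
            lines.append("%s==%s  (not found on PyPI)" % (name, local))
        elif remote != local:
            lines.append("%s==%s  PyPI:%s==%s" % (name, local, name, remote))
    return lines

print("\n".join(version_diff(
    {"Django": "1.4.3", "Fabric": "1.5.1", "Jinja2": "2.6"},
    {"Django": "1.5", "Fabric": "1.6.0", "Jinja2": "2.6"},
)))
```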


Sunday, March 24, 2013

Laptop in the sun

I left my laptop on the dining table in the morning.
It went to sleep mode.
I came in and found the laptop woken up with the fan going at full blast, trying to lower the temperature.

Who could be doing what in my laptop?

The only explanation:
The day passed. The sun turned... the direct sunlight actually raised the temperature enough to make some laptop temperature alarm go off and wake it up....


Integrating Github flows with HS

Reflecting on last weekend's HS experiment, I feel more and more that mapping jobs to github milestones is the wrong idea:
1. I can't seem to find a way to even point to a milestone page in github!! Pointing to issues works as expected but... milestones seem to not be a top level object in github's model
2. I can't seem to create a hook against a milestone status change (so that I can automate, for example, my OD job completion)
3. The milestone description editor isn't as good as the issue description editor.
4. It is really non-obvious to navigate to a milestone - it is more of a grouping tag than a standalone object

On the other hand, mapping issues to jobs has its own weaknesses:
1. When the user submits a pull request... it becomes yet another issue
2. There is no obvious way to relate the two issues together.

The last one seems a rather obvious weakness. I wonder why that isn't part of the typical use case of a person that runs his project from github. They capture the various bugs they find and enhancements they plan to do as issues. The issue list also grows as people report bugs and make enhancement requests. Every time an external contributor takes on an issue and fixes it, the pull request will create a new issue. A quick look at various projects shows that people use naming conventions like "fixes Issue xxx".

Looking at stackoverflow for a preferred workflow... doesn't bring an exact match to this problem... actually it does... Surprising.. so it is feasible to associate a pull request with an existing issue, but that's available only at the api level!!... or with my favorite github command line utility.. hub.

$ hub pull-request -i 4

This means that eventually we can have our cake and eat it too. I think I am switching..
Next experiment will be using issues as OD jobs.
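For the record, what `hub pull-request -i 4` does at the API level, as far as I can tell: it POSTs to the pulls endpoint with an `issue` number in place of a title, which turns the existing issue into the pull request. A sketch that only builds the request (the branch names are made up):

```python
# Sketch: attach a pull request to an existing issue via the github v3 API.
# Builds the request only - nothing is sent. Branch names are hypothetical.
import json

def issue_to_pull_request(owner, repo, issue_number, head, base="master"):
    """URL and JSON body for POST /repos/:owner/:repo/pulls with an issue number."""
    url = "https://api.github.com/repos/%s/%s/pulls" % (owner, repo)
    body = json.dumps({"issue": issue_number, "head": head, "base": base})
    return url, body

url, body = issue_to_pull_request("ogt", "pipdiff", 4, "contributor:fix-branch")
```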

Monday, March 18, 2013

The most important decision of your life

A friend from the LDS church told me, more than 10 years ago, something that somehow stuck with me.

friend: Do you know what is the most important decision of your life?
me: What? who I marry? (knowing the LDS view on eternal marriage)
friend: No, that's important of course but it's not the most important
me: Can you give me a hint?
friend: What is the most important, precious thing that you have
me: (I guess he doesn't mean my family... would it be my education? not precious.. would it be my memories...) I give up.
friend: Your time. The time that you were given is the most precious asset of yours. What you will do with the time that you were given is the most important decision in your life.

10 years later I was talking with a close friend that had also discovered (independently from the church, in spite of his own lds roots) that answer. He was in a conundrum: if it was the most important decision in his life, he should be spending most of his time figuring out the optimum answer - which was throwing him into a recursive loop of trying to understand his meta-existence.

Today I read that another person, Aaron Swartz, had a similar fascination - and I am trying to understand what the common pattern here is.

Community brain

You get a nice feeling when you are stuck on something and soon after - as if someone has heard you - you see that problem being solved, a solution popping up out of nowhere. Sometimes it's not the exact solution you were looking for - it feels as if someone was eavesdropping on your complaints and heard you through the wall, broken-telephone style - they had to guess what you were looking for, so they are not exactly right.

A friend 16 years ago (1996) used to call it the "community brain" and he actually was attempting to model it as a multicasting lossy network of questions and answers.. (and patent it).

Anyway, the community brain today answered two questions of mine
- I was looking for a "Design Patterns" book - but for Javascript. Hackernews answered today with a pretty close answer : http://shichuan.github.com/javascript-patterns
- A friend of mine has been trying to find an easy way to have a preview mode for a client side github repository (where the whole repo can be viewed as a real site) - without having to go through the usual gh-pages branch ugliness etc. Hackernews answered today with something similar - not exactly what we were looking for, but still useful in some cases : http://5minfork.com/


Sublime and markdown

I just found a plugin for sublime that allows markdown preview

https://github.com/revolunet/sublimetext-markdown-preview

The interesting thing is that it also does Github Flavored Markdown - the correct way, ie by using github's API - and it also allows for live browser reloads as you write/save (functionality that is provided by another module).
I am still using blogger for my blogs - but hopefully not for long.

(curious - how do embedded images work with the sublime previewer... also, how do relative links in general work... I need to test it)
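For the record, the github API in question is simple enough to sketch: you POST the raw text to the /markdown endpoint with mode "gfm" (and optionally a repo context so that things like #123 resolve to the right issue). Request construction only, no network call; the context repo below is just an example value.

```python
# Sketch: the request behind GFM rendering via github's API (POST /markdown).
# Only builds the request; the context repo is an example value.
import json

MARKDOWN_API = "https://api.github.com/markdown"

def gfm_render_request(text, context=None):
    payload = {"text": text, "mode": "gfm"}
    if context:                       # e.g. "ogt/otdump": makes #123 etc. resolve
        payload["context"] = context
    return MARKDOWN_API, json.dumps(payload)

url, body = gfm_render_request("Fixes #4 - see http://example.com", "ogt/otdump")
```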


Converting blogger blog to github blog

Following my previous post on this, I decided this weekend to see how it would look to have my blog become a repo - and still maintain some of its blog-ness.
First I used the previous post summary to create a job in OD - humanscript-style (or better here)
to get someone to do the manual part of the conversion.
The job took me less than 30 mins to write up and post.

My filtering questions proved very successful in easily determining the qualified person.
The first two applicants ignored them - the first person that answered them was immediately hired.



The job followed the "standard humanscript" delivery model: fork my repo, do the job, and when done issue a pull request for me to verify. I had already created the basic structure of the repo and hand-edited two random posts myself to be used as examples. (that preparation took another half an hour)

Right after hiring I sent a standard warm welcoming email - the applicant told me that they would have everything done by Saturday night. Throughout the day Saturday I was checking my email for any update - so that I could reply immediately. Here is the thread of messages I exchanged.


The job was a complete success. 
You can see the resulting repository at http://github.com/ogt/otdump. (You will need to navigate through the year -> month -> leafs to see the individual blog posts.) As part of the conversion all images were saved in github (I wasn't as consistent before - some images I used were saved in blogger, some were referenced).
As you can see from the messages, I missed an important question that the contractor discovered:
 - how to map the tag/label functionality that I have used (otdump.blogspot.com/search?q=helloworld)
The contractor also didn't seem to understand my request for incremental delivery.

Meanwhile, right after I posted the first job, I posted a second one. I still needed a script that generates a "home page" - the readme of the repository. I decided to do this in two steps: a script that creates the "db" in the form of a csv, and a script that creates the home page (and probably more pages than that) from the csv.
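The second step is small enough to sketch. Assuming the csv "db" has columns like date, slug, title (my guess at a minimal schema - the real job may have used different columns), the home page generator is essentially:

```python
# Sketch: generate a readme/home page from a csv "db" of posts.
# The column names (date, slug, title) are an assumed minimal schema.
import csv, io

def readme_from_csv(csv_text):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    lines = ["# otdump", ""]
    # newest first, linking into the year/month directory structure
    for row in sorted(rows, key=lambda r: r["date"], reverse=True):
        lines.append("- %s [%s](%s/%s.md)" % (
            row["date"], row["title"],
            row["date"][:7].replace("-", "/"),   # 2013-03-15 -> 2013/03
            row["slug"]))
    return "\n".join(lines)

db = ("date,slug,title\n"
      "2013-03-15,blog-posting-in-github,Blog posting in github\n"
      "2013-03-17,pita-and-souvlaki-recipe,Pita and Souvlaki recipe\n")
print(readme_from_csv(db))
```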

Here is the job for that:


In this case, the first Friday applicant responded with a decent answer to my question.
But the quality of the answer didn't make me confident enough to proceed with the hire. I opted to wait overnight to get more candidates. Next morning I had 4 people that had responded; 3 in total had answered the question, 1 of them said honestly that he didn't know the answer to one of the questions, and two had answered both questions correctly, but one of them did a more thorough job reviewing his response (fewer typos etc).. so I chose him - who also happened to be the least expensive ($10 instead of $15 - I ended up paying him $15 in the end)
Here are the candidates' cover letters: (1 failed one question, 2,3 ignored the questions, 4 answered)


And here is the cover letter of the person I hired


I proceeded in parallel with the first job post - and I ended up having the output by Sunday noon.

Here is the message thread I exchanged with the contractor.



Note that again the contractor was able to find problems I didn't spec correctly.
Note that I didn't have to provide any credentials - my message threads could very well have been public discussion.

In both cases I went to github and accepted the pull requests, which were merged without any issue.
I forgot to say that I created a milestone associated with each job and pasted the job post into the milestone - so that I have the instructions associated with each pull request. (dglittle does this by asking the contractor to include the instructions in the commit - which is something that usually causes confusion)
I want to experiment a bit using github's actual functionality (milestones) for that.
Interestingly, milestone text gives you github flavored markdown, which is better than OD's. So it may be that I only put a summary in the OD post and put the rest of my job post in github - they do have all the necessary APIs as well. I manually associated each pull request with the milestone, so when I merged the pull request the milestone automatically closed! It was neat.

There is one last thing to do now - the generation of the readme file.

Sunday, March 17, 2013

Pita and Souvlaki recipe

Since I started on the recipe thing, I asked my kids to send me what they captured from my instructions (the particular recipe I have below became a school project - unfortunately I don't have the pictures).
Note that these are not my instructions - it's what my kid heard as my instructions


Souvlaki with Pita 


Ingredients



- 12 Pita Breads – (2 Trader Joe’s Middle Eastern Flatbread packages)
- 4 lbs of Chicken – (Organic Boneless Skinless Thighs)
- 1 lb Tomatoes
- 1 Red Onion
- Tzatziki sauce – See separate recipe
- Olive Oil – (Kirkland Extra Virgin Olive Oil from CostCo)
- Salt
- Pepper
- Oregano Leaves


Instructions


First we add plenty of salt, pepper and the oregano leaves to the chicken and we mix it up. Meanwhile we have started the barbeque to get it heated up. We barbeque the chicken for 10 minutes at the strongest setting - it is important that we don't overcook it so the meat doesn't dry up. While the chicken is being cooked we cut the onion and tomatoes in thin slices and put them in bowls. At that time we also prepare the tzatziki sauce (see separate recipe). When the chicken is ready we place it on a board and we cut it in thin short strips

and put it in bowls and cover it up to keep it warm. At that point we invite everybody to the table, because the last step takes just a couple minutes and it is important that everyone gets a freshly cooked pita. We use a frying pan on high with a small amount of olive oil. We place the pitas on the frying pan - you can probably fit up to three pitas each time - and cook them on both sides until they get brown and start to puff up. At that point you either serve them directly on the plates or you make a stack. Everybody gets a pita and mixes all the ingredients together, and finally you get to enjoy your meal





Tzatziki recipe


Ingredients


- 1 Yogurt – Fage total Greek Yogurt from Trader Joes
- 1 Cucumber – long seedless “English Cucumbers”
- Garlic
- Olive Oil
- Vinegar

Instructions


You use the cheese shredder to shred the cucumber and then you squeeze it to get all the water out of it. You mix it with the garlic and the yogurt adding a bit of olive oil and a tiny bit of vinegar. You leave it covered in the fridge to get thicker.

Meat sauce recipe

Every now and then my kids ask me how we cook some of the foods.
I am not a "cook", so everything is pretty normal - but what I often capture in my instructions to them are the simple little things that I have learned and understood that make my food, our food, slightly different than the neighbor's food.

Here is my meat sauce recipe I just gave to my daughter:

Meat Sauce Recipe


Ingredients:

- 1 Organic Beef 640gr (1.34lb) 15% fat Ground Beef from costco.
- 1 large white onion
- 2 cans of Tomato Sauce (425gr each - the ones I use already have salt/sugar/pepper in them so extra salt/pepper is optional)
- A bit of olive oil

Time for preparation 30-35 minutes.

Instructions.

Pour the two cans of tomato sauce into a medium size pot on medium-high heat (I actually start on high and then lower it when it starts spilling).
Use water to clean any remaining sauce from the two cans - essentially you add 1 can of water for 2 cans of tomato. The more water, the longer you will need to wait.
Also the amount of time depends on how much the tomato sauce is actually pre-cooked. If you use just plain un-cooked tomato juice, it will take quite a bit longer.
Taking it out before its time will mean that you will be tasting the sourness of the tomato. Some people don't care about that.

Blend the onion to a puree and put it in a big hot (I use 11") frying pan with a bit of olive oil until it starts to become brown.
You need to keep stirring it with a wooden spoon.
When it is brown you pour it into the tomato sauce pot and stir it. You will need to carefully scrape the frying pan with the wooden paddle - if you leave pieces of onion
they will turn black and some people might find black things in their sauce.
In the same hot pan you now put the ground meat (if it was frozen, I defrost it in the microwave - high for 2-3 minutes, take it out, scrape off the softened part, back in the microwave, high for 2-3 minutes, take it out, scrape off the softened/thawed part etc.. you can have it ready to use in 5-10 mins from the freezer)
Back to the hot pan.
It's important that the pan is hot. Putting in the piece of ground meat, we should hear the sizzle.
The next 5 minutes are tiring. Using a big kitchen fork you keep breaking the lump of ground meat into smaller pieces.
In the beginning it feels like an impossible task - but as the meat cooks it breaks up more and more easily.
I like to find chunks of meat so I don't break it up completely - but that's a personal taste thing.
You also keep turning the side that is touching the pan - otherwise it becomes like a hard skin.
When the meat is mostly broken up, the pan should be practically cooking the meat in its own water.
It is important that you don't stop here. You want all the water of the ground meat out - pouring it into the tomato pot now would result in
a) the meat feeling boiled as opposed to sauteed and b) some of the heavy meat smell being in the sauce.
After 5 minutes or so all the moisture is gone and now there are tiny chunks of meat sizzling in the meat's fat.
That's good. You need to leave it in this sizzling state for 2-3 minutes to get the burned fat taste and smoked smell.
You need to keep an eye on the smallest chunks of meat because they are the ones that will turn dark first.
Keep on stirring now with the wooden paddle (no need for the fork after the meat break-up is done).
Anyway, when you are done with all that... pour the ground meat into the tomato pot,
turn it to a lower temperature at this point, and cover it up.
(it usually takes me 15 minutes to this point and the rest takes another 15 minutes.)
Keep opening it up every 5 minutes or so.
When it spills too much around the pot - it means that it is ready.

Friday, March 15, 2013

Blog posting in github

Ok,
I spent some time playing around with the idea of using github for blog posting - or dump posting.

Note that I am not interested in the features offered by jekyll etc, ie the full cms/static blog generator framework. I want my blog to be as close to a source repo as possible.
Here are the features that I am looking for:
 - Anyone should be able to fork the repo, make a fix/change to a page and issue me a pull request - that functionality should ideally be available where they read the post.
 - Having access to the inline edit (and automatic fork/pull request) of github would also be very nice
 - I should be able to create/update my blog posts from my laptop with my sublime with minimal extra noise in the file
 - I shouldn't need to do anything beyond a commit / push
 - My folder hierarchy should be ideally similar to my url structure
 - I should be able to save pictures that are embedded in the blog

--
Ok, let's start: https://github.com/ogt/otdump.
Just a plain repo. Directory structure 2013/01..03.
Adding the first blog post. I use the blogger's url as the filename - but with a .md extension instead of .html. I guess I would need to be manually creating my slugs. I may create a command line utility for that. The blog should have a bin dir with any tools it uses for its writing..

> pip install slugify
> echo "Hello world" | slugify
hello-world

Perfect .. exists already.
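(For the record, had it not existed, the core of such a slug utility is just a few lines - lowercase, keep the alphanumeric runs, join with hyphens. A sketch, not the actual `slugify` package:)

```python
# Minimal slugifier sketch: lowercase, keep alphanumeric runs, hyphen-join.
import re

def slug(text):
    return "-".join(re.findall(r"[a-z0-9]+", text.lower()))

print(slug("Hello world"))              # hello-world
print(slug("Markdown and Phantomjs"))   # markdown-and-phantomjs
```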
I copy paste into my sublime the content of the very first otdump post... hm - I can't preview locally.
That's not good.. I need to commit and push just to do a preview. Looking for offline github markdown preview tools... none does what github is doing (they fail simple GFM things like url auto-linking). And the github guys provide an API even for that.. They are good.
Maybe the simplistic solution of having a github online editor open, copy-pasting into it and hitting the preview button isn't that bad.. but what would happen with all my included pictures...
Darn, so it seems that I would create my blog in sublime and continue any further editing online?
But what about when I need to add a picture? I would save it in the post's folder, reference it in the post using github's new relative link syntax (that's really nice actually) - I mean doing that locally "blindly" - and then commit and push to preview.. That is definitely sub-optimal. The reality is that for a blog I should think of commit/push as a single command.. just like my online edits..
Maybe that's it - in online-edit the save + commit + push + refresh happens in a single command.
That's what I need then. I need to be able to do just that (all but the refresh I mean) from my sublime editor - it should be feasible.

Ok. Just looked at the post - it looks decent. And it looks nice from my phone too - not sure if it's the recent improvements that github did for mobile rendering (I thought I'd seen that somewhere - can't find it anymore..).
So I think that's a good start for now. I will probably add a utility to generate a home "readme.md" that looks like the blogger home, ie it has the monthly indexes and the first portion of the most recent 4-5 posts - maybe links to autogenerated monthly archives... am I recreating the static blog generator here.... maybe I am... I will have to do the critique later - I am confident that my features aren't available as I ask for them with jekyll... and if I find jekyll too much - imagine a random editor/reviewer I hire at OD to make some change/fix in my posts. Talking about that - I need to capture the instructions of what I do for a single post, and then hire someone at OD to fork my repo and do the same for every post of mine (and there are many by now) - essentially do the blogger -> GFM conversion for me - and send me a pull request when they are done.. Nice, I already feel good about this.


Thursday, March 14, 2013

Is my dump illegal?

As I was looking at the whole picture for the repository idea - the realization struck me.
I mean it's obvious - I just hadn't realized it. Using google-found images to embed into a private article to make it look more interesting is ok - but my posts aren't private any more. I mean, nobody reads them, I have disabled inclusion in search engines - but still they are public. And the repositories are public and _social_. Which means that the inclusion of the pictures I have found in google is probably violating someone's copyright. Which means that if that someone is disney - they probably already have robots scanning the web for the existence of any of their pictures anywhere, and then they contact the site host (blogger?) and will ask them to shut me down. I will wake up one morning with my personal, all-but-private blog disabled and a cease and desist mail in my mailbox...
Ooops.
I don't want to worry about that.. (my mind just forked a thread - my kids keep snapping pictures they find and sharing them with each other on instagram - how come that is legal... I guess they are not public, and even though instagram practically enables the sharing of millions of copyrighted pictures, by virtue of the fact that this happens behind the privacy of a social circle they are protected... interesting)

Anyway, how about buying cliparts.... found clipartof.com. Wow, amazing variety and great tagging - it seems that this is even more useful than google images. And they charge $5... no, $10 for a blog size photo. It seems a bit too much for too little.. - fork - I remember spending $250 buying a collection of clipart from my puny balance when I started OD to create the website, and when later the marketing dept came... they said I couldn't use it because I couldn't find proof of license!... Not only do you have to pay money... you have to keep forever an archive of your receipts and licenses... it's unreal..
No, even though browsing a clipart gallery seems to be actually a better approach than working with someone to draw for you what you may (or may not) have in your imagination.. it just doesn't satisfy me. Plus I feel like a sucker paying $10 for a clipart copy for a blog post that I spent 10 minutes "dumping".

So what to do. 
Looking for "open source" libraries...http://openclipart.org/.. looks kind of decent
You can even edit online the photo


20 minutes later - quite impressed by the capabilities (and the interesting use-case optimization) of the online photo editor/imagebot, but convinced that I am an awful illustrator (the train above isn't mine), and the variety of the free clipart libraries is too small, and I still cannot tell if I need to prove that I am allowed to use a pic...

So back to the original idea - hire someone at OD - not only for pictures for the public repos but for pictures in all my blog posts.. (it may cost less at OD to get custom art than buying a copy from clipartof...). Of course if I spend money on the pictures - maybe I should have an editor do some minimal typo/english fixing - I feel embarrassed every time I read my posts by how I write..

Of course doing that would have been so much simpler if my posts were github files - I could hire multiple people, point them to the blog posts and ask for edits as pull requests - (somehow that feels so much better than adding every random person I hire as a co-author in my blog...)..
Why did I get stuck and do the blog in blogger... it is doable...
fork - I shouldn't change the english - it is a dump of how my brain thinks - the errors, the occasional (or omnipresent) incoherency, is what my thought looks like - and that's what I want to persist in this blog.

As I was saying to a friend, there are tweets - badly syntaxed, full of typos, pseudo-english 140 char sentences - and then there are blog posts... well written, well syntaxed pages/stories.
My blog post is a dump - neither a tweet nor a blog post.. it is as raw and immediate as a tweet but as long as the thought thread wants to be... tweets, dumps and blogs... That's it.
Where was I - I feel my stack is about to overflow (stack overflow ;-) ) - too many open threads (I have to decide: am I out of working set memory from too many threads or out of stack-segment space (stack overflow - these things used to be different ulimits in unix..) - I have to stop - I keep on forking.


Adding pictures to my repos


I started full of excitement to add a picture to each one of my public repositories (note: I am talking about my hello-world steps here). Sticking on random pictures that I find by searching google images - the way I do with my 10-minute-labor blog posts - didn't seem right. Repos represent minimally hours/days or more of work. They are worth a bit more attention. So I tried to find something better.

Clip art galleries gave some ideas.. but they would all require some level of synthesis on my part.. And I am a really incompetent illustrator.

Darn, what do I do? I actually know what I want. I want to follow the style/brush/colors of the substack guy and use it to make a drawing of a bear (I actually found a picture of a drawing of a bear that I liked, waving hello) for "hello world", then that bear eating milk and cookies for hello world with cookies, then that bear holding something that looks like the standard "disk-drive" clip art that we use for databases - holding it and scratching its head with a "what the heck is this" face.

As I play it in my mind - I actually want less of the personal style of substack - and more of the typical cartoon style... Looking around a bit, I found some pictures of what _I_ think of as "typical cartoon" style.

That's better - it actually lends itself better to all my future ideas - they will all be in this particular cartoon style - I can make it my personal style.

And now - what? What do I do? I guess I will have to postpone the task for whenever I have time to hire a cartoonist at OD to do the job for me.. which probably means never.. Darn.
It would be nice if, every time in the past that I thought about a cartoonist marketplace, I had actually built it..
But even if I had ever built a cartoonist marketplace it would be dead by now - I haven't found (yet) the elixir of life for my tiny startups - they only live as long as I breathe air into them - they just die once I leave them alone - abandoned from lack of use.

Plus it would have been different - my previous idea had something to do with stick figures.
This time my ideal service would be one where I point to an existing drawing, give a few instructions, and within a few hours - for ideally $10-20 - get back the drawing of my liking.

Let's write it up:

The service works in the following way:
  o You find some illustration that you like (input #1)
  o Then you provide some text instructions (input #2)
    (e.g. use the same style, brush, colors etc but draw the person running after the bus instead of waiting for it)
  o You can use links to other pictures here, or you can add a photo of something that you drew - from stick figures to anything else.
  o A possibility here is that you are restricted in terms of what you can say - essentially forcing yourself into a very incremental process.
  o You pay for just a single-step transformation. You always start with a drawing and you change just one aspect of it.
  o The drawing has to be a drawing in the existing collection - which means that we need to be saving/reusing the vector graphics - that gives a nice network effect.
  o The drawing isn't owned by the user (it's owned by the service) - the user just gets a non-exclusive license to use the jpg/png version of it as they see fit.
  o The site gives examples of good "iteration steps" and bad steps.
  o The content isn't just the images - it's the before and after... (maybe it's worthwhile to expose the artists after all)
  o The site charges a minimal amount per step (e.g. $5) and enables a short TAT (e.g. hours).
  o The jobs get auto-posted/auto-hired on odesk with the right picture / request / TAT.
  o Requests that are complicated should be sent back with the suggested steps analyzed.
  o Starting from a fresh, externally provided picture adds $50, to produce the vector graphics.

Another alternative would be for the service to provide real-time interaction... That's actually a rather different idea..
  o Seeing the person that you hired drawing it is often what you need - you don't know what you mean until you see it.
  o You actually buy 5-10 mins of someone that you control via chat.
  o In this case you can actually see how fast people have drawn what... the content isn't just the before and after - it's the complete screencast recording of the drawing happening.

Hm... maybe I went overboard...

Let's go back to where I started..

Tuesday, March 12, 2013

Open startups

Typically folks talk about open source - and open source is obvious in the case of software for which you deliver an executable - you are supposed to provide the source for that executable. That doesn't mean a few random source files - but instead a whole set of files and a makefile that you run make on to produce the executable, at least for one platform. Not all your code is in source - it may be using, for example, platform libraries, and even if there is no code for these libraries (e.g. you provide source code for solaris and you are using some proprietary solaris libs) - most people will consider your software open source
 - even though you just provided a single port
 - even though the port is for a non-open-source platform
 - even though your code is (dynamically) linking modules that are not open source.


Why the diatribe on these 20th-century concepts?
Because when you are providing a web service/web app/web site the question is re-opened - what does it mean for that app to be "open source"? I remember quite a long time ago this question raged around sourceforge, a decade-plus-old iteration on the github idea. Sourceforge used open source elements, and they themselves contributed further improvements to those open source components, but as the audience complained, they didn't openly release "the glue" - the rest of the facilities that would be required for someone else to build another sourceforge. The authors initially reacted that the glue is not code - it's just various deployment/installation/configuration scripts (none of today's language existed in 2000 to describe these more accurately - I do remember people calling it "glue!") - but eventually accepted the reality: they were not open source. They could use open source components, they could extend GPL-licensed components, and - unlike in the case of executables - they could get away with it. GPL 3.0 tried to force open-source-ness on the server side, but not only did it not get adopted - the opposite happened: the GPL fell out of favor and other, less restrictive licenses became more popular in the years that followed.
So what next?
I think there is a resurgence in open-* use - to some extent driven by the success of a business model that says: if you are a free app I give you free access; if you ask for money I charge you. This type of "freemium" business model (exemplified by github, mashape etc) gives companies the opportunity to give a (costly) resource away for free, bootstrap their reach, and still have the means to make money without aggravating their original "free" users.

So I think that we will shortly need to think of "open startups" in a multi-dimensional way.
- Open source : An open source startup is one that provides source code not just for its components but for its complete deployment system. The crowd-funding post gives an example of how heroku could enforce that: it could enable pulling from github repos (as opposed to pushing only from repos on your own machine, as today), and obviously, for heroku to be able to pull, the github repo would have to be non-private. The combination of heroku supporting only interpreted languages (as opposed to C for example) and the complete deployment system (except the client-issued heroku configs) being public would allow heroku to practically guarantee that the service can easily be replicated by someone else who is willing to recreate all the third-party systems (whether these are heroku addons or aws or.... anything else that is communicated via heroku config to the deployment system and acts like the "libraries" in the executable story above).
- Open data : This is a longer topic - but this post has just crossed my 30-min limit... so I will put in a summary. If the service has data that is available to all users - it should provide a non-PII version of this whole db as a regular backup (like stackoverflow does). It doesn't have to make it easy to read. If it has multiple stores it doesn't have to "unify" them. It just needs to provide a regular backup that someone can restore, excluding data that the company is not allowed to share by law.
In addition to that, it should allow the same for each individual user, including all the data that the user has access to. Again, it could be multiple backups of whatever dbs, but it should include all the data that the user has access to in the system (excluding those that are accessible to all users and included in the previous part).
- Open finance/Open profit :
The startup uses an open system to report its income statement and balance sheet. It should be detailed and simplified enough to allow a casual user to see
  a) how much the people that work at the startup make
  b) how much third-party vendors cost
  c) how much money the company makes from which source
  d) how much of its own money the company has in the bank (various accounts), how much money it owes and how much money it is owed (balance sheet)

How can these two (open data and open finance) be enforced?... I have some ideas - but it may not be necessary for them to be enforceable.

Monday, March 11, 2013

Node and learning

Spending 1 hr reading per 10 minutes coding seems to be my current ratio in using/learning nodejs.
Things go still very slowly but they are moving forward.
Things I have learned today:

1. It's nice to put a picture in your project's readme - just like I do in the blog - not sure if this is common everywhere but many node folks do it.
2. Node is making people re-examine and return to the basic unix philosophies - see for example shoe.
3. The Art of Unix Programming by Eric Raymond (the cathedral/bazaar guy) seems to be becoming a simple way to explain design choices (I guess it's mostly Ken Thompson's philosophy - Eric is just the muse).

4. You can always find people that have thought what you have thought and wrote about it - but the trivial differences in their perspectives can drive to a completely different path: a crowdfunded PaaS leading to an open-sourced heroku.
5. What I thought was a friend's unique programming model seems to be followed by a good portion of the nodejs community. Node makes you want to unify the writing of your client and server code, and that results in some of the common challenges/solutions/patterns being faced and addressed by many members of the community.
6. Still haven't looked at how to produce this

build-status image in my projects' home pages. It's not that I have anything that builds and passes - I guess I do - even my helloworlds should have that...
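That image is typically the travis-ci.org build-status badge, embedded in the repo's README.md as a markdown image link (the user/repo below are placeholders, not a real project):

```markdown
[![Build Status](https://travis-ci.org/someuser/helloworld.png?branch=master)](https://travis-ci.org/someuser/helloworld)
```

The badge URL serves a png whose content reflects the latest build, which is exactly the "dynamic repository home page" effect described at the top of this blog.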
7. The guy that wrote shoe and browserify runs a company called browserling.com. Very simple, easy, and free for occasional use: sessions with any major browser/resolution/version, to facilitate browser testing.

Friday, March 8, 2013

Flash startups thought stream

A friend mentioned yesterday that he is interested in flash startups.

The first question I asked was - what do you mean by a flash startup?
A startup that takes minutes? hours? days? months? (note that there is more than an order of magnitude of separation between these definitions.)
I think he assumed that I was not serious about the "minutes" (I was).
He answered: hours. Creating a startup in a few hours.
He proceeded to explain how one can go, for example, from a napkin drawing of an interface, to a design, to a prototype, to a real product - probably in hours - by using on-demand pools of available experts.
I was a bit disappointed. I was disappointed because
a) he didn't jump to the seemingly impossible goal of doing it in minutes (as opposed to hours/days)
b) he didn't jump to the truly harder goal of building a company with a possibly scalable business model (i.e. a startup) in minutes, and instead focused on writing software faster (the napkin-design-to-product cycle)

Only when you press the boundaries can you see the truly inherent obstacles and problems - only then can you really come up with something revolutionary. And someone that is in research (as the friend in question is) has every excuse to do exactly that. Anyway, I am probably too hard on the guy - good thing he won't read this post.

Anyway, it's not that I have the answer - I don't even have a question.
I do have a thought stream - it's a bit amorphous still, so it may not be fully readable... Maybe if I were to add some pictures it would help...


--thought stream begins--

I see an org chart - a CEO at the top; VP of prod, eng, mktg, CFO, VP ops on the next layer; and within each department a whole subtree of roles and positions: ops -> cust support, billing; mktg -> PR, SEO, SEM, copy; eng -> dev -> front/back, QA, IT, etc etc..


The org chart isn't static as above, though - it expands from a single node outwards.
Think of asking the question: if I wanted a single-person startup, what would that person's role title be? E.g. a product guy? or a sales guy? or a developer?
How about if I had 5 people - how do I morph my org chart? Would I move outwards like a star? Maybe not - I would probably keep the CEO/Mktg/Sales/VPEng hat and put 4 folks under engineering as developers, or I would stay as the sole engineer (e.g. a CEO/Eng*) and add one Ops person and one Mktg person - because it's hard to find someone that is good at both of these.

Think that you have a full understanding of this tree, i.e. the roles that various kinds of startups may need, up to the size of a few hundred people. You understand their roles, their outputs, and you can easily answer the question "If I have a company in that sector and it has X people, tell me its most probable org chart - where all the employees/contractors are". Assume that this knowledge is so well understood that it can be captured in a program. Now assume that you can hire on demand any kind of talent that your org chart refers to, from CEO and CFO to copy editor to one-man legal dept... and not only can you hire them on demand, but you can rent them for a rather short amount of time.

The next thing I see is along the lines of iterative/pipelined work. But think of it not only within a single function, as exemplified in the post above. Think of it in a cross-functional model.

I see a person making a VC presentation about their flashstartups.com company (domain squatted).

The person asks the audience for a random idea for a startup. He picks one of them and starts idea ping-ponging with the audience - google hangout open - and on another screen a blank window named "My Idea" with a single dot, "Me", showing the beginning of the startup as an empty idea with just me in it.

As I start describing my idea (the hangout, typing, editing as described earlier), as soon as there is a well-written paragraph of that idea, a mktg hat pops up in that star-expanding org chart and pushes the paragraph to a "guess a domain name for this service" hat.
As the various domain names are being suggested, before I even reject them, the domains themselves are pushed to the prod->design hat that is asking for napkin logo ideas, while domain and paragraph together are now becoming input to the site "tagline". Meanwhile prod has already created a site template, registered it with launchrock against the first domain, picked a style and started bringing in some of the suggested domain names, one-liners, mktg copy. Through all of this, as you speak your idea, you have the freedom to interact and reject - you can control that interactive org chart, stopping it from expanding too much in areas where you don't want to see things happen - and at the same time you see
heads/hats with rates, and $s keep showing up; your balance paid increases second by second, while a filterable unified event feed gives you visibility of everything that is done at your startup, in a single timeline visible to you - as well as possibly to others - in its entirety or as subfeeds.

Within a few minutes, as the first hacker news post from the mktg dept shows up, the launch page has visitors, your phone buzzes, and in that VC presentation the door knocks and a task rabbit brings in t-shirts with the logo/tagline printed on them (*that* is impossible - so it requires cheating - a van in the parking lot with an iron press and a printer to do the t-shirts - but the cheat is important to make it clear that everything deemed impossible is possible)..

--thought stream ends--

What I described above is how I was thinking about flash startups 2-3 years ago, when I was talking about them with a friend from Ukraine who got me excited with startup-factory ideas.

Today, I think the above would look less magical - but it would have the same result.
There is no magic at all in creating a days- (or even hours-) needed mobile app startup and launching it on android. Still, it achieves all the aspects of the hats above by relying on the platform to provide the basic ops/mktg/legal/finance stuff, so there is no need to create any of that yourself. Making that model work outside the confines of a mobile appstore is a big part of creating a startup in a few hours.
To make a startup in minutes - that probably requires the more exotic ideas like the ones in my thought stream.


Thursday, March 7, 2013

Iterative pipelined work

Magic:

You first open up something that looks like a paint program,
you draw a stick figure.
Within 10 seconds, as you draw, a second frame shows up where you are watching another screen in realtime - an illustrator is tracing your stick figure and converting it into a professional-looking sketch. Within a few seconds yet another frame shows up - a 3rd screen where someone is using Photoshop, adding coloring, texture, shade on the second person's sketch. As you stop, you see the illustrator's drawing not stopping but expanding - as if they were playing drawing ping-pong with you, trying to figure out what you had in your mind. As you watch the person he draws, you realize that he is drawing a man - you meant this to be a child. You add a short caption - "child asking" - and some more context - the illustrator goes back and quickly changes the face and body proportions to childlike ones, while the photoshop guy repaints the walls in pastel colors.

Reset.

You open up what looks like an editor and give the summary of an algorithm.
Within 10 seconds a second editor window shows up where you are watching a person following you,
creating the basic functional blocks of the algorithm you are describing, putting a simple comment on top of each function. By the time you are done, that person is already in the primary function, starting to code it. You click into their editor and change the comment of a certain function that you want to have a different result. You also comment on the side that you want the utility functions and the test code separated.
As the second editor is starting to restructure the code template using the modified instructions, you see a third window starting to fill in some of the simpler functions, and within another 10 seconds a fourth window plugs/passes in all the functions, opens up an interactive console, checks for errors, and goes ahead and starts writing test functions.

Reset.

You start a hangout, and soon after, 4 people join it. You start presenting them the idea.
The participants seem to be typing busily - and on the side you see the transcript of your speech appear in a google doc. You can clearly see 3 userids filling it in - alternating, pipelined - following you phrase by phrase. With each phrase of yours a new cursor starts, while the last one continues typing what you have already said. As you talk faster and faster, you soon see 3 cursors typing concurrently, chasing you. A fourth cursor follows, fixing typos, adding newlines, commas etc.
Within a few minutes you see a fifth person opening the document - not the hangout. You see the whole content formatted, paragraphs and section titles added, your own non-oral mistakes corrected and reworded.


What is the "right" entrepreneur profile

I was talking with a friend about creating a service that in some way helps people launch their site/service/product.

The idea here is a vast audience of "startup makers" - people that are spending their time creating some new utility, site, service
- the vast majority of these efforts will never see the light of day,

  1. several never get past the discussion/brainstorming round,
  2. some make it to the drawing board, sketches and workflows,
  3. some make it to a blog/domain/launch page,
  4. some make it to development,
  5. some make it to an actual working product that the creator can actually use themselves,
  6. some get to an alpha phase where the creator invites friends and fools,
  7. some end up as a public utility that is public but doesn't fully work,
  8. some end up as a public site that works but is too expensive to just keep up,
  9. some have longevity but after obtaining some oxygen they quickly become irrelevant and forgotten,
  10. some end up obtaining an audience that persists and have a founder that is comfortable putting in the time/energy to keep the site up.

There are two opportunities here - one is to cater to the need of people to feel like creators/founders/entrepreneurs, without building into your business model benefits for when the entrepreneur succeeds. The other - which feels more morally correct - is to try to help people succeed, and rely on their success for you to be successful.
In the beginning I was torn by the dichotomy above - but eventually I decided that there is no difference: a startup that helps wannabe entrepreneurs will have happy customers not when they do a successful startup - but when they manage to move a step ahead (see the steps above) from where they normally get stuck. If you do that, in aggregate you will improve the end-success rate of the funnel.

Anyway, the next argument was that people get stuck on a step because the next step is something that they really don't know / don't want to deal with. A coder often doesn't want to deal with IT stuff, deployment and such; a back-end developer doesn't want to deal with the UI and design of the product, and will make the api but not the end-user product; a front-end guy will never do the backend - and he will keep on making "prototypes".. And the same thing happens across the more varied personality types: programmer guy, product guy, marketing guy, business guy. Not only that - but each one of them sees all the complexity of their own world while minimizing the complexity of the rest of the world - miscalculating the corners that need to be turned to get to the end. While the programmer may understand that they need programming and IT, the product guy simply feels that he needs a technical guy, and possibly the business guy thinks that he needs a technical product guy.

The next argument was that these audiences are very different - they almost represent different markets/products. If you browse sites like cofounderslab.com and try to understand where the various guys come from, it's all over the place. A service/product that tries to help programmers launch startups is quite different from a service/product that tries to help business folks launch startups.
On the other hand, a programmer is known to have time but not money, while a business guy is expected to have money, or to be able to find it more easily. Plus the conventional wisdom was that programmers aren't good at understanding business value... Plus programmers that are thinking of a startup are fewer than business folks that are thinking of a startup (browsing cofounderslab seems to argue that too - there are more people looking for programmers than programmers looking for business people..).

Unfortunately the above is against what I was thinking of doing. My (hopeful) view is that the old conventional wisdom is wrong. Technical folks - people that come with cs degrees as opposed to business degrees, that know how to code as opposed to make powerpoints - are taking over the startup world. Their population grows disproportionally, because the world needs many more of them than it needs business people - they are the new prosumer class (in the programmer-consumer sense). They are not like the last decade's programmer stereotype: antisocial, a caveman, not knowing marketing etc. Github and stackoverflow have made them more social/connected. This newer, much more attractive stereotype of the "github geek" has attracted scores of converts - and that has made the whole trend grow further etc etc.

So the service I want to build is a service that targets programmers, and it should provide what programmers don't want to be dealing with. And that is....

Wednesday, March 6, 2013

Post online education era CV

I tried to picture the CV of a person in the post-online-education era, and it looked rather different than today's linkedin resume.


Instead of education being at the bottom of the resume with a couple of entries describing degrees/universities, educational activity isn't part of the past. It is instead interlaced with all other activities in a continuous timeline.
It doesn't talk as much about degrees; it talks about achievements, lessons, certificates obtained.
The lessons are specific and clickable: "Marketplace design CS234, Stanford, Prof: J.R."
Clicking any of the courses would bring up more detailed, certifiable information about the achievement: grades, professor/TA comments, project links etc etc. There are courses from multiple universities.
There are internships. A timeline graph may also be shown that allows the reader to follow more prolonged activities - mentorships, board participation, organization memberships etc - and the progress within each.
The other interesting thing is that "employment" relationships are shorter, overlapping with each other, with much less clear indication of full-time-ness.

The CV is interactive, almost like a filterable event feed. I can click checkboxes and see just one type of activity or search and filter only activities by location/category/engagement type etc.

Several CV timeline events link to public archives, be it blog posts, news articles, repositories or contributions.

One large vs many small - continued

Talking with a friend, we realized that there is an additional disadvantage (for the small) in the "many small" vs "a few large" debate.

Today the world faces a significant information/innovation discovery problem.
There already is a solution to many of the problems and pains we are facing - we just don't know about it - the pain/path to find it is too costly, often bigger than the pain/problem itself.

When a large company - be it apple, costco, or amazon aws - adds that innovation to its "shelves", we find it.
We already know that we can find decent cheap web services at aws, good electronic stuff at apple etc etc, so we will use them as the place to buy innovation from. (Think of innovation as anything that involves a change in my normal habits/way of working and living: buying an inexpensive package of vine-ripened organic tomatoes from costco is "innovation" in that sense.)

Anyway, back to the many-small vs large..
If the many small are to win the war against the few large, the problem of discovery will need to be addressed. The appstore did that; to some extent google search and google ads did that; even ebay and amazon did that for their marketplace partners - they created "marketplaces" that facilitated discovery, putting big brands/publishers as equals with thousands of small ones.

I think google (both as a search platform and as an ad platform) is becoming a non-ideal platform for "innovation" discovery. Q&A sites like stackoverflow and quora are addressing some of this problem - but they are Q&A sites, as opposed to marketplaces of new sites/services.

There is still something missing - but it can't be too general.. I am not sure what its correct definition is.

Monday, March 4, 2013

One large vs many small - who has the advantage

As I use the heroku ecosystem of services, add-ons etc and compare it to what you get in a larger centralized organization, I almost always end up becoming more confident that the business model of small, well-defined business services - small in terms of "employees", small in terms of funding required, small in terms of the time it took to get the service started - is much more attractive than that of a bigger company. The obvious question that comes to mind is whether a set of small companies like that can address an important need of a large population segment. I think yes, but that's not the argument that I want to make right now. As I was discussing the above with a friend, he asked me: what are the intrinsic advantages of a larger organization?
Is it reliability of services? Is it efficiency? After discussing both of these for a while, we concluded that a large enough, maturing marketplace of such business services would address both problems.
As a small service creator I would have both reliable and less reliable third-party services to use, and I would probably need to resort to do-it-yourself solutions about as often as a product manager in a larger organization does.
However, there is one thing in which the larger organization will always have a big advantage over a set of smaller independent ones:

It would have access to a data warehouse that contains the aggregate knowledge of all the pieces of the business, while in the other case every small service/business will not be able to see/understand things beyond the small sphere of data that it relates to. The larger the organization, the bigger its advantage (even if, because of bureaucratic issues, it doesn't use it) in knowing and understanding more of the world. That warehouse, full of relatively private data, typically has access given to a small, controlled team of business analysts, data scientists and such, and ends up being the driver behind everything from product feature changes to strategy changes, pricing changes, acquisitions, reorganizations and such.

How would you address that inherent weakness?

One extreme possibility could be an extreme GPL-style model for data.
- You create a co-op network of companies that share certain common rules for sharing open data.
- An individual that gets access to the open data promises that any future companies/services they create for a period of xx months/years will follow open-data practices as well.
- An individual that is working for/contracting with/collaborating with/influencing a closed-data company is not allowed to get access to the data.
- End users of open-data services accept a relatively lower level of privacy - so as not to asphyxiate the system with privacy controls.

Another extreme (in a different way) possibility is to create a framework for data leasing.
Essentially a set of legal contracts that allow a company to get "warehouse access" to your company for a fee. Why would a company ever do that? For funding purposes, for example. A funder is willing to risk a certain amount of money - but for what benefit? Small private companies do not necessarily have exit paths, or dividends. But if the company succeeds, its data becomes more valuable - and even if it needs/keeps the data for itself, the investor can access it. The framework could require that any persistent store used by the company enables a mirror or a daily backup process - enabling the entity that bought data access to incorporate the data, if it wants, into its own greater data warehouse.



Sunday, March 3, 2013

worknotes - helloworld with db


Ok,
Looking back at my smaller steps plan blog post:

[1] helloworld-cookies (cookieparser?).  DONE
[2] helloworld-db (mongdbhq?).  NOT
[3] helloworld-routes (express) ALSO DONE
[4] helloworld-session (?). NOT
[5] helloworld-authentication (passport, passport-local) NOT

It seems that the primary change from the original plan was the earlier introduction of express, to allow writing the hello world following dglittle's code-style: i.e., make index.html be just js functions that use APIs to figure out the state and create the appropriate displays. The server-side APIs involve routing, which brings the need for express.
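A minimal sketch of the routing need just described: the server has to map method+path pairs to API handlers, which is exactly the job express takes over. The route names and handler shapes below are illustrative, not taken from the actual code.

```javascript
// Dispatch table mapping "METHOD /path" to a handler function.
// Express replaces this manual table with app.get()/app.post() etc.
const routes = {
  'GET /api/userinfo': (req) => ({ username: req.cookies.username || null }),
  'POST /api/register': (req) => ({ registered: req.body.username }),
};

function dispatch(routes, method, path, req) {
  const handler = routes[`${method} ${path}`];
  return handler ? handler(req) : { status: 404 };  // unknown route -> 404
}
```

With a table like this, index.html can stay a set of js functions that call the APIs and render whatever state comes back.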

So the next step would be to add the db layer.
In this simple next step we would:
- on register
+ use the db to enforce username uniqueness
+ store the userinfo in the db
+ set the username in the cookie
+ return the complete userinfo
- on logout
+ reset the username cookie
- add a login screen that allows the user to switch to a prior user context
+ check for username existence and, if found, set the username in the cookie
- add a change-user-info screen that is prefilled with the userinfo and allows the user to change first or last name
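The steps above can be sketched as plain functions, with a Map standing in for the mongolab collection and a plain object standing in for the cookie jar. All names here (register, login, logout, updateUserinfo) are illustrative, not from the actual code.

```javascript
function register(db, cookies, userinfo) {
  if (db.has(userinfo.username)) return { error: 'username taken' }; // enforce uniqueness
  db.set(userinfo.username, userinfo);   // store the userinfo in the db
  cookies.username = userinfo.username;  // set the username in the cookie
  return userinfo;                       // return the complete userinfo
}

function logout(cookies) {
  delete cookies.username;               // reset the username cookie
}

function login(db, cookies, username) {
  // check for username existence and, if found, set the username cookie
  if (!db.has(username)) return { error: 'no such user' };
  cookies.username = username;
  return db.get(username);
}

function updateUserinfo(db, cookies, changes) {
  // prefill from the stored userinfo; let the user change first/last
  const updated = Object.assign({}, db.get(cookies.username), changes);
  db.set(cookies.username, updated);
  return updated;
}
```

In the real version the Map operations become queries against the mongolab collection, but the control flow stays the same.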

I will use mongolab as the db.
Ok, it seems that
> heroku addons:add mongolab
(which is equivalent to heroku addons:add mongolab:starter, i.e. the free plan for mongolab)
is doing all the right things. It registers the add-on, and it seems to be somehow associated with the specific app. I am actually not sure what exactly that means.
I hope it means that I can get a free starter plan with every app (as opposed to just one for my account). I hope it doesn't mean that I can't access the mongodb from two separate apps, if I wanted, by sharing the URI. Even though there was a UI to add the mongolab add-on, the ability to do all that just from the command line (with the command above) feels so... good and powerful.
Before I added the db code, I wanted to familiarize myself a bit with the db, so I thought of using the mongo shell for that.
First attempt failed - it doesn't take heroku URIs as a parameter.
> heroku config|grep mongo 

MONGOLAB_URI: mongodb://heroku_app1234567:some_secret_pass@ds0123456.mongolab.com:45432/heroku_app1234567
What does the man page say? No man page... Googling instead, I find some docs and retry;
after 4 efforts and going back and forth googling for docs, I am really annoyed that there is no man page.
How could it be that there is no man page? I just brew-installed it. Maybe the manpath isn't right...
No, there is no mongo* in /usr/local/share/man/*/.
... 1 hr later... I have man pages.. it took quite some doing:
> brew install sphinx  #that probably wasn't needed
> pip install sphinx  # building the docs requires sphinx-build
> cd /tmp
> hub clone mongodb/docs
> cd docs
> make man # fails with some docutils error  #active bug
> pip uninstall docutils # remove current docutils
> pip install docutils==0.9.1  # last version that works here
> make man
> cd build/master/man/man1

> man mongo  # yeahhh

> cp * /usr/local/share/man/man1
> pip install --upgrade docutils # bring back the current version
> rm -rf /tmp/docs #cleanup


Luckily right after that I managed to get mongo to connect:
> mongo -u heroku_app1234567 -p some_secret_pass ds0123456.mongolab.com:45432/heroku_app1234567
..
and I got in
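The working invocation above can be derived mechanically from MONGOLAB_URI. A sketch using node's built-in URL parser (the URI in the usage below mirrors the redacted one from `heroku config`, not real credentials):

```javascript
// Turn a mongodb:// URI into the flag form the mongo shell expects:
//   mongo -u <user> -p <pass> host:port/db
function mongoShellArgs(mongolabUri) {
  const u = new URL(mongolabUri);  // WHATWG URL parses the authority part
  return `mongo -u ${u.username} -p ${u.password} ${u.hostname}:${u.port}${u.pathname}`;
}
```

For example, feeding it the redacted MONGOLAB_URI shown earlier yields the same `mongo -u heroku_app1234567 -p some_secret_pass ds0123456.mongolab.com:45432/heroku_app1234567` command line that finally connected.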