The last and greatest HS/GH experiment is still underway, but there are already plenty of learnings:
The first Issue has already been completed successfully, with a 20-hour turnaround from job post to pull request acceptance.
The experiment involved OD sourcing issues 3, 4 and 5 of the boxchareditor repo.
It is the biggest experiment I have run so far in terms of size, budget and expected amount of work.
New elements in the experiment:
1. Test Validation (node-tap based)
2. Coverage Validation (node-cover based)
3. Code practices Validation (jshint based)
4. Task Dependencies
None of 1, 2 and 3 was automated. Acceptance testing was done by me cloning the developer's fork locally and running
> cd /tmp; rm -rf boxchareditor; hub clone GulinSS/boxchareditor; cd boxchareditor; npm install; make
Make would run the tests, the coverage report and the lint, and essentially produce an error if any of the three validations failed.
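For illustration, here is a minimal sketch of what such a Makefile could look like; the target names, file paths and exact CLI invocations are my assumptions and not the repo's actual Makefile:

.PHONY: all lint test coverage

all: lint test coverage

# 3. code practices validation: jshint exits non-zero on violations
lint:
	./node_modules/.bin/jshint lib/ test/

# 1. test validation: node-tap runs the TAP test suite
test:
	./node_modules/.bin/tap test/*.js

# 2. coverage validation: node-cover instruments a run and prints a report
# (test/all.js is a hypothetical entry point)
coverage:
	./node_modules/.bin/cover run test/all.js
	./node_modules/.bin/cover report

Since make aborts with a non-zero exit status as soon as any target fails, the single make call at the end of the clone command is enough to gate acceptance on all three validations at once.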
The contractor applied to the job within an hour of the job post and was immediately hired.
It is interesting to point out that even though I invited past successful project hires, this new applicant responded faster. By the time I hired him he had gone to bed, so he started working roughly 8 hours after the job post.
He came back to sync 15 hours later, and from that point on we had an occasional back and forth, every hour or two initially, in the issue comments and then on Skype.
Interesting conclusions:
- Even though it is absolutely meaningful and productive to capture in written documentation and instructions every element that may become a recurring question from a contributor, ongoing interaction between the two parties (e.g. communication, guidance, iterative review) is still normal and should be happening as part of a task. This may seem to go against the purer idea of HS, which involved complete operational instructions and no communication other than the acceptance phase, but I think it is a necessary compromise.
The contractor in this case
a) suggested refactoring the code towards a significantly better architecture,
b) suggested unifying some of the follow-up tasks, given the new refactoring,
c) and the refactoring later exposed a difference in the actual end-user behavior of the system.
All of these made sense, and all of these required interaction. The system ended up better as a result of these interactions. Overall I spent approximately 1 hour on these interactions, while my estimate is that the developer spent more than 10 hours developing the changes.
I had to spend another hour preparing the issue, of which approximately 20 minutes went to the mechanical aspects that are being automated by gl.
Still, the leverage I obtained was very impressive.
Issues 3 and 4 were dependent on 2. This implied a few changes in the flow:
- Jobs for issues 3 and 4 were created as private (awaiting the completion of Issue 2)
- My plan was/is to first make each dependent job available (i.e. via invite) to the contractor of the job it depends on, and only if they reject it to make it public.
- The language in the template was updated to explain that the successful hire would get first dibs on the dependent jobs as well.
It is interesting to note that the successful developer didn't have time to do features 3 and 4, showing again the importance of global liquidity and the difficulty of relying on a preset pool of people.