
October 10, 2004

Comments

Stephane

Just two comments on the "Demonstration" part:

How do they evaluate the “number of potential users”? This criterion is often an efficient way to detect a fuzzy project: in my recent experience, the last three companies I met presented unrealistic figures and, in the end, a poorly established concept/system.
How fast can they switch from a demo to a proof-of-concept implementation, where relevant? That shows how they would make it work in the real world. What I call a “proof of concept” here is a basic integration of the product into its ecosystem. (I have some examples in case you disagree.)

Chris Powell

WRT Demonstrations: You can fool (nearly) all of the (non-specialist) people (nearly) all of the time.

WRT Innovation: "...languages will be supported by the interface (Java, C++..."? The point of interfaces is that they hide the implementation. Perhaps you mean APIs, but even then, most can be called from other languages.
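To make that point concrete, here is a minimal Java sketch (all names are hypothetical): the caller compiles against the interface alone, so the implementation behind it can be swapped without the caller changing.

    // Callers see only the contract, never the implementation.
    interface PriceFeed {
        double latestPrice(String symbol);
    }

    // Any implementation (remote call, cache, test stub) can sit behind
    // the interface without touching code written against PriceFeed.
    class StubPriceFeed implements PriceFeed {
        @Override
        public double latestPrice(String symbol) {
            return 42.0; // canned value for demonstration
        }
    }

    public class InterfaceDemo {
        // Depends only on the interface, so any implementation will do.
        static void report(PriceFeed feed) {
            System.out.println("ACME: " + feed.latestPrice("ACME"));
        }

        public static void main(String[] args) {
            report(new StubPriceFeed());
        }
    }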

WRT Architecture: Standards are what you make them. Is my system adhering to J2EE, J2SE or J2ME? In some cases, adhering to a standard can hurt performance or innovation. Are my practices ISO900x/CMM Level x compliant? Without an in-depth, independent check, how would you know? I've interviewed software candidates from CMM Level 5 companies who had no clue whatsoever about good development practice. That standard was just a name.

WRT Development process: Software metrics are among the most misused, badly understood and poorly utilised measurement techniques ever. How is software quality measured? Number of outstanding bugs? But what's a bug? Is it a bug, or is it a required 'late feature'? Are we talking bugs per line, bugs per method, bugs per component, bugs per package, user expectations, conformance to a (moving) spec? What level of bug? One man's serious bug is another's trivial one. Perhaps it's ease of use (who measures this)? Performance? Functionality? Portability (which might affect performance)? There are also coding styles, documentation, unintended (internal) dependencies, and so on. I measure many, many factors in determining the quality of a piece of software, ending up with a multi-dimensional set of figures from which decision makers will make a move (or not). Then I add in development processes (which can be fixed) and other peripheral observations that might affect development (e.g. how often are developers interrupted to fix the boss's email or whatever?).
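As a rough illustration of that multi-dimensional approach (a sketch only; every metric name and value below is invented, not a real measurement), the figures can be kept as separate named dimensions rather than collapsed into one score:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Quality reported as several named measurements, not one number.
    public class QualityScorecard {
        private final Map<String, Double> metrics = new LinkedHashMap<>();

        void measure(String name, double value) {
            metrics.put(name, value);
        }

        // Deliberately no single aggregate score: collapsing the
        // dimensions is exactly what hides the trade-offs described above.
        void print() {
            metrics.forEach((name, value) ->
                System.out.printf("%-28s %8.2f%n", name, value));
        }

        public static void main(String[] args) {
            QualityScorecard card = new QualityScorecard();
            card.measure("open bugs (severity >= 2)", 17);
            card.measure("bugs per KLOC", 0.8);
            card.measure("spec conformance (%)", 91.5);
            card.measure("p95 response time (ms)", 240);
            card.print();
        }
    }

Decision makers can then weigh the dimensions against their own priorities, which keeps the trade-offs visible instead of burying them in an average.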

Gregg Davey

I’ve seen a lot of companies miss their projected synergies over the years because they didn’t properly conduct IT due diligence or factor in the critical role technology plays in post-merger business consolidation. Generally speaking, you can’t consolidate a business without first integrating the technology, so any assumptions about timelines for integrating the business that don’t take technology integration into account are pure speculation.

A good resource for IT Due Diligence: http://www.beaconintegration.com/resources/merger-blog/category/due-diligence


