Friday, September 28, 2007

A little follow-up

Sometimes soon after writing something I see something related to the post, but I've been lousy about following up. This week, though, particularly out of a desire to recognize the generosity of friends & strangers, I will.

  • Following my mention of the Cambridgeport chickens, the Boston Globe mentioned some more free-range biotechophilic poultry: there is a wild turkey living near Kendall Square in the vicinity of Biogen Idec. I've seen one gobbler down near North Station, but never there -- which is mildly irritating since I used to walk through there a lot. I'm also reminded of the flock of enormous feral white geese that hang out down by my old haunt of 640 Memorial Drive, though for some reason they don't stir much affection in me (though they have some very passionate defenders every time the parks department suggests thinning the flock).

  • One of our summer interns stopped by for a visit & with a grin announced she had a present for me. I was quite mystified -- and then startled & thrilled to see the nanopore paper in her hands. The paper is more review & overview & planning/dreams than data, but it's hard not to get a little caught up in the enthusiasm. Imagine getting 200 nt/s from nanopores packed in at nearly micron spacing! The article itself doesn't expand much on the process of transforming the sequence into a different defined sequence (with each nucleotide translated into words of multiple nucleotides; see the toy sketch after this list), but does explore a bit more why this would be useful -- to generate more widely spaced signals -- in a sense, the nanopores just read too quickly. However, it does mention the company (Lingvitae AS) with the technology, and they have some slick animations. The details are a bit sketchy, but with a mention of Type IIS restriction enzymes (those that cleave outside their recognition site), ligation, and the fact that the new words are added at the opposite end, one can make some guesses about how it works -- probably involving circularization. It does sound like you aren't going to get the super-long reads once dreamed about for nanopores, as there isn't talk of long sequences being transliterated into even longer ones, but if you really could get the throughput & have nanopores grabbing new DNA molecules after they finished with old ones, it is possible to imagine getting really amazing sequencing depth.
  • A reader was kind enough to use a comment to point out another article on the new Genome Corp, one showing a bit more conscious connection to the original. The article is still spotty on details, but does have two tidbits. First is a strong hint that electrophoretic separation is still in play here -- but presumably on a very micro scale, perhaps on-chip? Second, Ulmer wants to set up a very highly optimized DNA factory, not a company selling machines or kits. Pondering different genome sequencing business models is at least a post in itself, but since I currently work in a highly industrialized DNA factory it does hold some resonance.
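
Since the transliteration concept is easier to show than describe, here is a toy sketch of the general idea as I understand it; the word length, the particular words and the function names are all invented for illustration, not taken from Lingvitae.

```python
# Toy illustration of the 'transliteration' idea: each original base is
# re-written as a longer, defined word of bases, spreading the signal out so
# a reader that works very fast (like a nanopore) gets more signal per
# original base. The 8-mer words below are invented; the real chemistry
# (Type IIS digestion + ligation) is only guessed at in the post.

WORDS = {
    "A": "TTGGCCAA",
    "C": "TTGGAACC",
    "G": "TTCCGGAA",
    "T": "TTCCAAGG",
}
DECODE = {word: base for base, word in WORDS.items()}

def transliterate(seq):
    """Expand each base into its defined multi-base word."""
    return "".join(WORDS[base] for base in seq.upper())

def back_translate(expanded, word_len=8):
    """Recover the original sequence from the expanded one."""
    return "".join(DECODE[expanded[i:i + word_len]]
                   for i in range(0, len(expanded), word_len))

original = "GATTACA"
expanded = transliterate(original)
print(expanded)                              # 56 nt standing in for 7 nt
assert back_translate(expanded) == original
```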

Casting a beady eye on kinases

A huge area of drug discovery is targeting protein kinases. There are about 500 protein kinase-like proteins in the human proteome. Some of these are probably not active (pseudokinases; review [paid]) and periodically there are claims of protein kinase activity in novel proteins, but that's the ballpark. An increasing number of drugs target these, with Gleevec as perhaps the best known but a large parade of others coming forward.

Most kinase-targeting drugs compete with ATP in the active site. The ATP-binding site of kinases shows a lot of conservation, and so cross-reactivity is a big topic in the field. What is desired for specificity depends on the target & disease & one's tolerance for risk. Gleevec was originally touted as being laser-focused on BCR-ABL, but it actually hits a number of kinases and many of these have yielded new markets, such as KIT for gastrointestinal stromal tumors. Being 'dirty' may be useful in oncology, where many kinases may be contributing to the tumor's growth & survival. On the other hand, in chronic diseases one probably wants a really focused drug (or at least can't tolerate one that isn't).

I got involved a little in kinase screening back at MLNM. The workhorses in the industry are in vitro assays using purified kinases. These are useful and can be run en masse, but everyone knows in vitro isn't always predictive of in vivo. Furthermore, despite diligent efforts by a number of vendors, not every kinase is available. So, the field is ripe for development of new approaches, especially ones which explore the compound in vivo.

In an ideal world, there would be a complete panel of biomarkers specific for each kinase which could be used to measure the impact of a compound on every kinase-regulated pathway in the cell. That's a long, long ways off -- only a few kinases really have good, reliable assays & many are essentially uncharacterized.

The latest Nature Biotech has a nice paper from Cellzome on a proteomic approach to the problem & an accompanying News & Views item from a top mass spec person (either of the prior links requires a Nat Biotech subscription, but Cellzome also offers the paper for free).

The strategy is to derivatize beads with promiscuous kinase-binding compounds and use these to pull down bound kinases for identification by mass spec. Such an approach has been described previously, particularly by a company Cellzome acquired which had been doing this work. The big advances in this paper are the use of iTRAQ labeling reagents to enable accurate quantification of the bound proteins and the use of a competitive-binding format to assay test compounds.

The competitive binding angle is clever. Past efforts tried to derivatize the compound of interest to link it to the beads. This has many undesirable properties: the linkage may change the binding properties, each compound must be worked out separately, etc. The new work identified a set of standard promiscuous binders which can be used to capture a large fraction of the kinase repertoire. This same set can be used with just about any compound. By using these in a competitive format, where what is measured is how much the compound of interest disturbs the binding profile of the kinobeads, quantitative measurements can be made. Entire binding curves can be pulled out from a few experiments -- and binding curves for every kinase reliably pulled down by the beads. Slick!
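
To make the binding-curve idea concrete, here is a minimal sketch of the sort of fit involved, assuming a simple one-site competition model; the concentrations and signals are invented and this is not the paper's actual analysis.

```python
# Sketch of extracting an apparent IC50 from a competition experiment: free
# test compound competes a kinase off the beads, so the captured signal falls
# as the concentration rises. Concentrations, signals and the simple logistic
# model are all invented and are not the paper's actual fitting procedure.
import numpy as np
from scipy.optimize import curve_fit

conc  = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0])   # uM free compound
bound = np.array([0.98, 0.95, 0.80, 0.45, 0.12, 0.03])   # fraction of kinase still on beads

def competition(c, ic50, hill):
    """Fraction still bound to the beads vs. competitor concentration."""
    return 1.0 / (1.0 + (c / ic50) ** hill)

(ic50, hill), _ = curve_fit(competition, conc, bound, p0=(1.0, 1.0))
print(f"apparent IC50 ~ {ic50:.2f} uM (Hill slope {hill:.2f})")
```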

Their coverage of the kinase world is quite good, though not perfect. In a set of experiments they pulled down 307 kinases. While that isn't everything, it's a lot -- and some of the rest may be pseudokinases or not expressed in any of the tissues they looked at. However, some probably just aren't bound well by the reference compound set. Whole small subtrees are missing from their Figure 2 -- examples of the missing include all the GPCR kinases (BARK, GRKs, etc.), the two TLKs, 2 of the 4 polos, the WNKs, all 3 of the Akts, and a number of miscellaneous cell cycle kinases (CDC7, BubR1 & R2, etc.). Some of those are mighty interesting, but of course the solution is to find more compounds to put on the beads.

What's also nice is that the assay isn't limited to kinases -- lots of other stuff comes down too. Initially it was apparently clogged with heat shock proteins (using an ATP analog as the probe). But, in the current format they found a good sampling of non-kinases, and for Gleevec identified a candidate off-target with some known biology.

So what's the catch? Well, there are a few caveats. First, it is a binding assay, so hits need to be followed up to demonstrate actual inhibition. Second, it is going to be ATP-site specific. That covers most current kinase inhibitors, but there are probably many out there that work outside the ATP site, and this method will be blind to that (or to off-targets bound by something other than ATP mimicry). Third, many of the things being pulled down won't have much known about them -- do you worry about them or not?

As described in the M&M, the assay requires 5 mL of cell lysate -- not tiny, but not whopping. So this probably wouldn't be applied to every compound coming out of medicinal chemistry, but perhaps to characterizing representative members of particular lead families or compounds that are quite a ways down the med chem path.

One other interesting bit at the end: instead of adding the test compounds to lysates, they added the compounds to intact cells & waited a few hours before lysis. As they note, many kinase inhibitors have slow off-rates -- they stick to their target rather fiercely. So, the assay can be run after a delay. The peptide mixtures could then be subjected to phosphopeptide enrichment & identification, enabling the phosphorylation state of downstream kinases to be probed & examined in relation to the concentration of compound used.

Thursday, September 27, 2007

What exactly is Sanger sequencing?

Today's GenomeWeb contained an item on yet another genome sequencing startup, Genome Corp (which was the name proposed for the first genome sequencing company). Genome Corp is being started by Kevin Ulmer, who has been involved in a number of prior companies (for a quite effusive description, see the full press release).

Ulmer is an interesting guy. I heard him speak at a commercial conference once & he had the chutzpah to put up the famous Science 'and then a miracle occurs' cartoon with reference to his competition, and then launch into a description of his own blue sky technology. If I remember correctly, it involved capturing nucleotides chewed off by exonuclease & then cooling them to the liquid helium range. Not that it can't be done, but it wasn't actually a high school science fair project either.

The technology is described as "Massively Parallel Sanger Sequencing", with the comment that Sanger is responsible for 99+% of the DNA sequences deposited in GenBank. I hadn't actually thought about it before, but I probably annotated somewhere north of 50% of the bases generated by what would have been the runner-up method a few years back, Maxam-Gilbert, due to some genome sequencing projects run while I was a graduate student. Multiplex Maxam-Gilbert sequencing actually knocked off a few bacterial genomes, but those were corporate projects which were kept proprietary (by Genome Therapeutics, whose corporate successor is called Oscient) -- and those were annotated by a version of my software. Never thought to toot that horn before!

It also reminded me that somewhere I saw one of the other next-generation technologies (I think it was Solexa's) described as Sanger sequencing. Which leads to the title question: what is the essence of Sanger sequencing? I generally think of it as electrophoretic resolution of dideoxy-terminated fragments, but if you think a bit it's obvious that Sanger's unique contribution was the terminators; Maxam-Gilbert used the same electrophoretic separation. So, by that measure, Solexa's method is a Sanger method. On the other hand, ABI's SOLiD isn't (ligase, no terminators), nor is 454's (no terminators). 454 could be accommodated by stretching the definition to using a polymerase and unbalanced nucleotide mixtures to sequence DNA, but that seems a real stretch.

The press release didn't really give much away, and a patent search on freepatents didn't turn up anything quickly (though it did find another scheme of Ulmer's using aptamers, a periodic idée fixe of mine). There have been publications describing miniaturized, microfluidic Sanger sequencing schemes retaining size separation as a component (e.g. this one in PNAS [free!]), so perhaps it's in that category.


The funding announced is from a public (or quasi-public) fund supporting new technology in Rhode Island. It's not really within commuting range for me (not that I'm looking for a change), but it is nice to see more such companies in the neighborhood. There's at least one other Rhode Island-based next-next generation sequencing startup I've seen, so perhaps the smallest state will yield the biggest genomes!

Tuesday, September 25, 2007

A First Commercial Nanopore Foray?

Today's GenomeWeb carried the news that Sequenom has licensed a bit of nanopore technology with the intent of developing a DNA sequencer with it. The press release teases us with the possibility of sub-kilodollar human genomes.

Nanopores are an approach which has been around for at least a decade-and-a-half -- a postdoc was working on it when I showed up in the Church lab in 1992. The general concept is to observe single nucleic acid molecules traversing through a pore. It's a great concept, but has proven difficult to turn into reality. I'm unaware of a true proof-of-concept publication showing significant sequence reads using nanopores, though I won't claim to have really dug in the literature. Even such an experiment would represent a small step but not an imminent technology -- the first polony sequencing paper was in 1999 and only in the last few years has that approach really been made to work.

Which is one reason I'm a bit apprehensive as to who bought the technology. Sequenom has done interesting things and has a great name (I had independently thought of it before the company formed; if only I had thought to cybersquat!). But, they have had a rough time in the marketplace, and were even threatened with NASDAQ delisting a bit over a year ago. Their stock has climbed from that trough, but they're hardly flush: only $33M in the bank and still burning cash at a furious rate. Can Sequenom really invest what it will take to bring nanopores to an operational state, or will nanopores be stuck with a weak dance partner which steps on its toes? I hope they pull it off, but it's hard to be optimistic.

It would also be nice to learn more about the technology. I found the group's most recent publication, but it is (alas!) in a non-open access journal (Clinical Chemistry, though oddly Entrez claims it is open access). I might spring the $15 to read it, but that's not exactly a good habit to get into. The most enticing bit is that the current version apparently relies on generating cleverly-labeled DNA polymers that somehow transfer the original sequence information ("Designed DNA polymers") and then detecting the sequence as passage through the nanopore activates the labels. It sounds clever, but moves away from the original vision of really, really long read lengths by reading DNA directly through the nanopore. The question then becomes how accurate is that conversion process and what sorts of artifacts does it generate?

A Parent's Worst Nightmare

Today's Globe contained a story sure to cudgel the heart of any parent: an apparently healthy 6-year-old girl collapsed & died during a suburban soccer game this weekend. Details were not yet available, but in such cases one class of causes is cardiac arrhythmias.

Such horrible events are very rare, but still very concerning since they injure or kill persons who otherwise would have very long futures ahead of them. One response to this is to suggest screening all young athletes for arrhythmias. Like all screening exercises, these run the risk of many false positives, which can incur financial, medical & always emotional costs.
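
To see why the false positives pile up, here is a back-of-envelope calculation; the prevalence, sensitivity and specificity are assumptions chosen only to illustrate the arithmetic.

```python
# Back-of-envelope arithmetic (all numbers assumed for illustration) showing
# why screening a large, mostly healthy population for a rare condition
# produces far more false positives than true positives.
prevalence  = 1 / 1000     # assumed rate of a dangerous cardiac condition
sensitivity = 0.90         # assumed: fraction of true cases the screen catches
specificity = 0.95         # assumed: fraction of healthy kids who screen negative
screened    = 100_000

true_cases      = screened * prevalence
true_positives  = true_cases * sensitivity
false_positives = (screened - true_cases) * (1 - specificity)
ppv = true_positives / (true_positives + false_positives)

print(f"true positives:  {true_positives:.0f}")     # 90
print(f"false positives: {false_positives:.0f}")    # ~5000
print(f"chance a positive screen is a real case: {ppv:.1%}")  # ~1.8%
```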

With widespread personal genome sequencing around the corner, there will certainly be interest in trying to use this information to prevent such tragedies. The fact that a number of polymorphisms relevant to such sudden collapses are already known makes this not at all hypothetical. However, just as with screening by other methods, it is likely that such tests would be crude for quite a while going forward -- too many causative mutants will be unknown (false negatives) and some of the seemingly harmful variants will prove to be either incorrectly labeled so or not harmful in the particular personal context (e.g. another variant suppresses the effect). Furthermore, since such events are rare it will be challenging to find more such variants -- especially if there are a large number of rare variants predisposing to such events.

In any case, it's hard not to cross one's fingers -- no parent should have to worry about a routine childhood activity carrying invisible risk.

Saturday, September 22, 2007

My old company announced some very good news this past week: their Phase III trial for Velcade in newly diagnosed multiple myeloma had halted early because the experimental arm was performing so much better than the control arm. The trial tested a standard combination of myeloma drugs, melphalan and prednisone, vs the same pair of drugs with added Velcade.

The structure of the trial is a good illustration of how cancer chemotherapy most frequently moves forward: an agent which shows activity alone is tried in combination with existing chemotherapy regimes in the clinic. This is a conservative approach and has moved therapy forward, but it also has several glaring shortcomings. For example, it is very unlikely that compounds lacking single agent activity will be tried, though it is certainly possible that there exist compounds which would work only in combination. Another example is that this is just one of many combinations being tested; there really aren't good ways to determine which combinations to try, other than trying them. Pre-clinical cancer models aren't particularly good other than for very rough estimates, and small trials might miss effects -- particularly if only a defined subset of patients would benefit from a particular therapy.

Myeloma therapy has advanced greatly in recent history, with Velcade being an important contributor to that. The other big new drug in myeloma is Celgene's Revlimid, which is a follow-on to thalidomide. Thalidomide is, of course, one of the worst horror stories of drug history, having caused thousands of severe birth defects when used as a morning sickness drug. Thalidomide's resurrection as a chemotherapy drug was impressive, as is Celgene's business cleverness in getting a monopoly on an off-patent drug by patenting the safety system designed to prevent a re-play of the birth defect catastrophe.

Millennium was once positively giddy about Velcade's commercial prospects -- at one internal class a person from marketing gleefully exclaimed that 'our competition will be thalidomide and arsenic' (arsenic trioxide being another newish drug for myeloma). Revlimid's rapid ascent was a rude surprise, and since it is an oral drug and Velcade an injectable, it is a difficult competitor -- particularly when there is really not any rational way to determine which drug should be used in which patients (or whether they should be combined, which has looked promising but will bankrupt any payer). The fierce competition is one reason Millennium all but erased my department last year.

It's worth noting that our therapeutic ignorance is really pretty great for both. For Revlimid, the molecular target isn't known. A microarray paper (PNAS open access) this summer suggested an underlying reason for its utility in another blood malignancy, but the results are far from ironclad. Whether they apply in other malignancies is a question. On the other hand, we know the molecular target of Velcade (the proteasome), but why tumor cells -- and why particular types of tumor cells -- are sensitive to this remains a mystery. The literature is full of hypotheses, but again none are really nailed down in a convincing way. Given that ignorance of mechanism, it isn't surprising that we are in the dark as to how to combine the drugs or pick out diseases (or disease subsets) to use them in.

Will we ever move from incremental, conservative, empirical approaches to some rational, mechanistic hypothesis-driven approach? I wouldn't be optimistic for the short term -- there is still too much we don't know. But in a decade-or-so time frame, perhaps we will really get a handle on the mechanisms of the disease. That's a wild guess & perhaps pessimistic, but on the other hand we are just now starting to get therapies (e.g. Iressa, Gleevec) from the oncogene research that emerged from Nixon's War on Cancer (when I was but a wee lad) -- which makes a decade time frame wildly optimistic.

Friday, September 21, 2007

Mail call (ooph!)

A few weeks ago I came home to find someone had mailed me a phone book -- at least that was my first impression. The return address of the old shop suggested the explanation for the bulging package -- the full text of a newly issued patent on which I am listed as an inventor.

I've lost track of how many patents I have -- it's not a huge number, perhaps a dozen, and a few trickle out periodically. When I had to interview last year, I did go track them down to get the resume right. It could have been a deluge -- the paralegals once made a habit of booking me for an hour so I could autograph my way through a mountain of applications.

I'm nonchalant about it because the patents are part of that dubious flood of gene patents from the genome gold rush. Nobody knew whether they would be worth anything, but more importantly nobody wanted to be caught without one should they prove valuable -- so the lawyers made a fortune. On my end, in most cases my contribution was my development of the software which sieved the molecular databases -- I was more of a meta-inventor than an inventor.

I'll never really know what, if anything, comes of most of them without a lot of work, as the patent titles are broad and vague. My notorious gene numbering system will be immortal though: many of the patent titles mention those numbers. There is one major exception to this: one cluster of patents led to a compound currently in clinical trials. My contribution was clearly very small and very early, but it is nice to know that something good might come out of it.

I did somewhat expect the huge package -- not that I am claiming clairvoyance. No, I had advance warning, also by post. There are several companies which will put your patent number or title on a wide variety of knick-knacks, such as T-shirts, coffee mugs, plaques, etc., and their mailings spring forth as soon as the patent issues. That's how I've always known when a new patent came out -- because I got junk mail. Funny system.

Monday, September 17, 2007

Ah, Sweet Success

I had mentioned a few weeks back my struggling with a difficult programming problem. At that point, I thought I was close to success. Well, I was closer than when I started, but not by much.

Eventually however, I cracked it! Actually breaking down and consulting my one computer science textbook helped. An even bigger step towards success was realizing I had bitten off more than necessary: by reducing the scope of my problem by a bunch, I could really make life simpler.

Once I had something working, a number of different urges set in. One is to clean up the code. This can range from just clearing out all the bits-and-pieces that didn't end up in the final solution, to a 'refactoring' where you redesign the whole program organization to what you would have done if the successful approach had been apparent from the start. I opted for mostly housecleaning, as the worst thing is to redesign your code back into a non-functional state!

The other two urges are to optimize & to add functionality. Optimizing for speed can be a bit of a siren's song, as you can always tweak out a little more performance. A wise (and brotherly) sage once tutored me to optimize only where necessary, and I try to stick to that. The initial implementation ran so slowly it was exasperating to troubleshoot, so on went the optimizing gloves. Luckily, there was an obvious way to cache intermediate results which went a long way. Indeed, after one set of caching I realized I could toss out some troublesome code I still didn't trust -- speed & simplicity in one package!
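
For flavor, here is a minimal sketch of the kind of caching that did the trick; the real code isn't shown here, so the function and the 'subproblem' below are purely illustrative stand-ins.

```python
# Illustrative only: the sort of caching of intermediate results that sped
# things up. expensive_subproblem stands in for whatever costly, repeatedly
# requested computation the real (unshown) program needed.
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_subproblem(key):
    # Pretend this takes a long time and gets asked for the same 'key'
    # over and over during the search.
    return sum(x * x for x in key)

def best_candidate(candidates):
    # Each candidate re-uses cached subproblem results instead of recomputing.
    return max(candidates, key=lambda c: expensive_subproblem(tuple(sorted(c))))

print(best_candidate([(3, 1), (2, 2, 2), (1, 3)]))   # (2, 2, 2)
```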

Adding functionality is another matter. In my case, the algorithm is solving a complex constraint-satisfaction problem. My main challenge is discovering all the constraints: many are more or less company lore, meaning you have to climb the mountain and present your proposed solution to one of the gurus so that they can show you the error of your ways. But, one encouraging sign that I solved the problem in a good way is that in most cases the constraints fit neatly into the current framework; I really haven't had to rethink the code in a huge way to fit any in. So I can do something right.
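
A purely hypothetical sketch of what 'the constraints fit neatly into the current framework' can look like: each new rule is just one more predicate appended to a list, and the checking code never changes.

```python
# Purely hypothetical sketch of constraints slotting into a fixed framework:
# each newly discovered rule (the 'company lore') becomes one more predicate
# on a list, and the core checking code never has to change.
CONSTRAINTS = [
    lambda sol: sol["length"] <= 100,   # invented rule learned from a guru
    lambda sol: sol["gc"] >= 0.30,      # another invented piece of lore
]

def is_valid(solution):
    return all(check(solution) for check in CONSTRAINTS)

def first_valid(candidates):
    return next((c for c in candidates if is_valid(c)), None)

print(first_valid([{"length": 120, "gc": 0.50},
                   {"length": 80,  "gc": 0.45}]))    # the second one passes
```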

The whole process can be humbling on so many levels. For example, there were some off-by-one errors that were quite difficult to shake out of the code -- indeed, I finally realized that one wasn't an error in the code but rather my attempt to eliminate it represented an error in my thinking. Some others took a while to get out, and one little annoyance just popped back up again.

One humbler is the fact that the approach I finally took was one I had considered and rejected earlier as too difficult to get right -- but I ultimately realized it was really much more intuitive to implement than the others, and perhaps more importantly much easier to check & interpret intermediate steps towards the solution. Looking back, I see this as the programming analog to what Derek Lowe recently commented on for medicinal chemistry: sometimes the correct solution is right in front of you, but you have to travel a long ways to discover this.

But a real ego-trimmer is to discover that your code is smarter than you are. The program actually weaseled its way into certain clever solutions that were not only good, but seemed to violate the algorithm. Indeed, some of the earlier implementations might well have explicitly forbidden this. Amazing how smart a dumb programmer's code can be!

Saturday, September 15, 2007

Scoping out DNA

One of this week's GenomeWeb items mentioned an extension of the research agreement between ZS Genetics and the University of New Hampshire. I've heard their head honcho speak a couple of times, and ZS Genetics should be interesting to watch. They propose to (nearly) directly sequence DNA using electron microscopy. Because DNA, like most organic materials, isn't very opaque to electrons, they have a proprietary labeling scheme to label the DNA in a nucleotide-specific manner. Electron microscopy is essentially monochromatic, but if I remember correctly the concept is to grey-scale code the various nucleotides.

One attraction of this sort of scheme is a vision of very, very long read lengths -- the ZS Genetics talks mention 20Kb or so. Such long reads have all sorts of enticing applications, from reading through very complex repeat structures to directly reading out long haplotypes.

The devil, of course, is actually doing this. The data I've seen so far suggests that this approach is definitely in the next-next gen category, along with various other imaging schemes, microcantilevers & nanopores. ZS recognizes that they have a ways to go & propose near-term applications in single-cell gene expression analysis. The danger is that they end up stalled there or worse.

Friday, September 14, 2007

Some serious swag

Steven Syre's often excellent Boston Capital column in Thursday's Globe discusses the amazing situation Harvard is in. Yet another endowment investment manager has walked away from the job of managing a mere $35 B.

That's right -- 3.5 x 10^10 dollars. Greater than the market capitalization of any biotech firm except for Genentech or Amgen. It apparently now kicks out $2B a year in funds to be spent, which is still more than the market cap of most biotech companies. As Syre puts it, only 8 other U.S. universities have total endowments larger than the growth in Harvard's endowment last year. Boston's fabled Big Dig came in (grossly over budget) at a measly $15 B. If a human genome can really come down to $1K/person, that will be enough to sequence 10 million people -- more than the population of Massachusetts! -- and by then the endowment will probably have grown.

I'm sure Harvard has no end to requests for this money. They apparently already waive undergraduate tuition for families earning less than $60K, and Syre asks whether they will extend that someday to all students. The big project in the future is to build out a new campus in the Allston section of Boston, on land which Harvard secretly bought up back in the 90's. A lot of that campus will be science buildings, which aren't cheap to outfit. Also, it isn't clear whether all these outsized gains will eventually boomerang: are Harvard's managers really that good, or have they taken on a lot of risk (and so far been rewarded for it)?

It would be interesting to see the Crimson folks really think big with this. For example, how could some of these funds be used to support un-fundable research projects? How about fully-funding junior faculty until tenure? Could some be used as seed money to start companies to commercialize university research findings?

Thursday, September 13, 2007

Slow can be good

My usual mode of commuting is to walk to a station, take a train into Boston & then things get interesting. The biotech zone of Cambridge is not particularly near North Station, nor are there truly convenient transit connections.

There is a little blue bus sponsored by a consortium headed by MIT which starts at North Station & ends at my office. It's a good option, with a few caveats. First, Codon doesn't belong to the consortium (nor will they take us, we're too small), so it's $1 a ride. That adds up over a week. Second, the route maximizes coverage of employers over minimizing travel time, so other options can beat the bus, especially if you aren't in sync. Third, one needs options when the bus is running late or has just been missed.

For the part of Cambridge I'm now based in, any reasonable option is centered around the Red Line, but will also involve a bit of walking. Since the streets are in a pretty ordered grid in Cambridgeport, and we're a few blocks in each dimension from the subway station, there are a variety of reasonable routes.

The big advantage to walking is that you see things you would miss at the speed of a car or even the bus. Everything is closer & you have more time. I like spotting dogs & cats and looking at the details of gardens. There is one crazily cut & painted fence which I pass routinely, whose various designs & inscriptions could keep me interested for months.

Today I stumbled into an ethical dilemma: is it okay to taste the raspberries when someone's canes are sprawled through the fence & onto the public sidewalk? I resisted, but not by much. It was also a big surprise to find that Altus has a peach tree in their parking lot, which is laden with peaches. There are also a few grape arbors around, so if you know the right way to walk this time of year you can find the wonderful aroma of overripe Concords.

By far the biggest pleasant surprise I've ever had in the neighborhood happened a year or so into my Millennium career. I was walking from Fort Washington (a building now entirely occupied by Vertex, which at the time subleased it to MLNM), in a foul mood from the meeting which had just ended. I was pretty much staring at the sidewalk directly in front of my feet when a jarring thought came from my peripheral vision. Did I really just see a chicken? Sure enough, the little garden had 2 or 3 beautiful poultry strutting & scratching. I saw them there off-and-on for several years, but I haven't seen them for a long while so I assume they're gone. I really do miss the Cambridge Chickens.

Monday, September 10, 2007

Quite unlike Caesar's Wife

Massachusetts is buzzing with the news that the Mass Biotech Council, an industry group, had closed in on a candidate to replace the ethically challenged former politician who had resigned as the previous president. The big news isn't only who it is, but the fact that the same person has been cleverly multitasking by nearly simultaneously interviewing for the MBC job & writing the Patrick administration's big biotech program. Uh, oh.

It's truly sad. Biotech generally has a good public reputation and is viewed as different from Big Pharma. Squandering that good will by even appearing to be engaged with double-dippers is disappointing. Many more shenanigans like this & biotech will be viewed as just another big business playing the corrupt political game for private benefit at public expense.

The Globe has also reported that the MBC is looking less-and-less like an organization run by biotech executives and more-and-more like a pure lobbying group. I've attended some MBC-sponsored seminars and they were quite good. I'm not naive enough to think that lobbying isn't useful, but it is equally naive (but in the other direction) to allow it to take over.

It isn't hard to see how that slippery slope is entered. Millennium once sponsored a role-playing simulation, and Tom Finneran (the former MBC head who resigned after being convicted of corruption) is a charming guy. He'll charm your socks off. He'll activate every charm-induced promoter in your genome. But he also made the MBC look to the public like just another cushy landing for a charming pol.

Wednesday, September 05, 2007

Iconix bows out

One of the end-of-summer news items is that toxicogenomics firm Iconix will be purchased by Entelos, one of the small group of physiology modeling firms out there. The deal is worth between $14.1M and $39M, dependent on certain milestones.

Toxicogenomics is an area which just hasn't panned out as a business model. This was one of Gene Logic's big pushes, but they mostly seem to be focusing on their drug repositioning business these days. At least one other company in the area came and went, along with my memory of its name.

Toxicogenomics has a strong appeal. Iconix & Gene Logic at the first level looked very similar: many compounds screened against key toxicology sites (liver, kidney) by microarray & then digested into predictive algorithms. In concept, you run your own compounds in the same models & profile them and then see which patterns come up. If you see a nasty red flag going up, the compound dies early and cheaply.
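
As a cartoon of 'see which patterns come up', here is a sketch that scores a new compound's expression profile against reference toxicity signatures by simple correlation; the genes, numbers and threshold are all invented for illustration.

```python
# Cartoon of 'profile your compound & see which patterns come up': compare a
# new compound's expression profile against reference toxicity signatures by
# correlation. Genes, values, signatures and the flag threshold are invented.
import numpy as np

reference_signatures = {
    "liver_unhappiness":  np.array([ 2.1, -1.5,  0.3,  1.8, -0.9]),
    "kidney_unhappiness": np.array([-0.4,  2.2, -1.7,  0.1,  1.3]),
}

new_compound = np.array([1.9, -1.2, 0.5, 1.5, -0.7])   # log-ratios vs. vehicle control

for name, signature in reference_signatures.items():
    r = np.corrcoef(new_compound, signature)[0, 1]
    flag = "  <-- red flag" if r > 0.8 else ""
    print(f"{name}: r = {r:.2f}{flag}")
```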

Iconix had a nice little roadshow that would stop in Boston every 2 years or so with a mix of academic and industrial folks talking about toxicogenomics & its near cousin genomic-profiling-for-mechanism-of-action-determination (MOAmics?). I went to at least two of them: I was interested & it didn't hurt they were free.

As low as $14M seems pretty cheap, and Gene Logic's downplaying of this business also suggests that the market is not strong for these services. Part of the catch is the size of your database: customers aren't going to like finding ugly effects later, once they have moved on to more expensive test systems. If the Anna Karenina principle extends to toxicology (each unhappy compound is unhappy in its own way, or nearly so), then your database can never be big enough. The rosier view is that you simply get profiles for 'kidney unhappiness' or 'liver unhappiness' which are downstream of the unique insult. In any case, building up big databases of profiles isn't cheap, though that price is falling with various innovations -- so perhaps some of these companies were just too early for their own good.

One of the positions I explored after Millennium had a fair dose of toxicogenomics, suggesting that industry hasn't given up. But it may well be yet another area where Big Pharma doesn't really see the advantage of small biotech in doing it, or perhaps doesn't trust that work to outsiders. Myself, I was involved in a tiny way with one toxicogenomics project at Millennium, which had also decided to mostly go the in-house route (though they did license in the Gene Logic database) -- right before toxicogenomics pretty much disappeared. Actually, that wasn't the first genomics project at Millennium where I arrived just in time for the shutdown -- at least one other project (antibody production) had the same synopsis. Not something I want to think about too hard...

Tuesday, September 04, 2007

What I Didn't Read This Summer

In my last entry I commented on the stack of books that did and did not quite get read. Since yesterday marked the traditional American end of summer, it's time to note what didn't get read in a more actively-not-read manner. Two books stick in the mind.

I tried to read The Black Swan by Nassim Nicholas Taleb, but quickly found myself skimming the pages & then not doing even that. Perhaps it was my state of tiredness or something else, but I just found the book grating. The contrast with How Doctors Think is striking: both deal a bit with unusual or unique situations, and in neither case did I find myself consistently in agreement with the author. But, whereas Groopman comes through as humbled by the challenge & thoughtful about the issues, to me Taleb was obnoxious & arrogant. If someone found the book enjoyable, I'd love to hear why -- not because I want to argue, but maybe it would give me the incentive to try again. His previous book, Fooled by Randomness, was recommended to me by a trusted source (actually, even loaned to me), but I never cracked it open. I'm debating whether to revisit that as well.

A more active avoidance was Michael Behe's latest Intelligent Design opus, The Edge of Evolution. I actually read his previous work & emitted a review, so I can do this. But it's kind of like a colonoscopy -- you know you should get one periodically but there's nothing pleasant about the thought of it. I have actually been challenged in a social situation to defend evolution -- a hazard of being known as a professional biologist (another is being asked to critique crank books & far-out 'alternative therapies' & nutrition schemes). On the one hand, it is appropriate to read what you might wish to criticize. On the other, there's only so much time for reading: why not spend it on the subset of books likely to be some combination of enjoyable & informative?