drplokta: (Default)
[personal profile] drplokta
Congratulations to [livejournal.com profile] autopope, who is currently #3 on the BBC News website's "Most Emailed" stories, and #4 on "Most Read", with this.

(no subject)

Date: 2007-07-10 12:06 pm (UTC)
ext_16733: (inquisition)
From: [identity profile] akicif.livejournal.com
Aarrgghh. Not the 12C/13C magic wand again. Even conceding a gadget for laying down the right sort of atoms in the right place (which I very much disbelieve in), how does he expect to read the data back again?

(no subject)

Date: 2007-07-10 12:30 pm (UTC)
From: [identity profile] nmg.livejournal.com
By picking the atoms up one-by-one and weighing them, as ane fule kno.

(no subject)

Date: 2007-07-10 01:29 pm (UTC)
ext_58972: Mad! (Default)
From: [identity profile] autopope.livejournal.com
Ayup!

Actually, Steve's dead right about the "how the HELL do we read this stuff?" problem being a big one; but the point stands -- if we follow the logic of Moore's Law and Richard Feynman's "There's plenty of room at the bottom" to its logical conclusion, we end up with 6 x 10^23 bits per mole of whatever-the-hell-our-storage-medium-is, and to all intents and purposes it makes no difference whether we're using single carbon atoms in a tetrahedral matrix or, say, oxidation states in a transition metal in some complex supporting matrix -- one or two orders of magnitude difference in mass doesn't make any really significant difference to the social and cultural effect of $STORAGE_TOO_CHEAP_TO_DELETE being available.

Mind you, I'm willing to consider that any tech that can synthesize a diamond from raw atomic feedstock while discriminating C12 and C13 nuclei should be able to take a stab at reading it back out while dismantling it ...
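The "6 x 10^23 bits per mole" figure above is easy to sanity-check with a little arithmetic. A minimal sketch, assuming one bit per atom and carbon-12 (12 g/mol) as the medium:

```python
# Back-of-the-envelope check on the "one bit per atom" figure above:
# Avogadro's number of bits per mole of storage medium. The choice of
# carbon-12 as the medium is an illustrative assumption.

AVOGADRO = 6.022e23          # atoms (and hence bits) per mole

bits_per_mole = AVOGADRO
bytes_per_mole = bits_per_mole / 8
zettabytes_per_mole = bytes_per_mole / 1e21

# One mole of carbon-12 weighs 12 grams.
grams_per_mole_c12 = 12.0
zettabytes_per_gram = zettabytes_per_mole / grams_per_mole_c12

print(f"{zettabytes_per_mole:.1f} ZB per mole")            # ~75.3 ZB
print(f"{zettabytes_per_gram:.2f} ZB per gram of diamond") # ~6.27 ZB
```

About 75 zettabytes in 12 grams of diamond, which is the scale that makes "storage too cheap to delete" plausible.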

(no subject)

Date: 2007-07-10 02:22 pm (UTC)
From: [identity profile] nmg.livejournal.com
any tech that can synthesize a diamond from raw atomic feedstock while discriminating C12 and C13 nuclei should be able to take a stab at reading it back out while dismantling it

And there's the rub. We've got used to what is effectively random access mass storage; seek times on hard drives are comparatively low, and there are usually enough other things that your processor can be doing to while away the microseconds until it gets its next block of data.

Most of the proposals I've seen for extreme storage at the near-atomic level either assume that you have to dismantle your storage medium to get at the bits, or that you've reinvented tape storage via some amenable long chain molecule with repeating subunits. In both cases, you'll probably be performing a linear seek to get at the data you want.

As your storage density grows towards N_A bits/mol, and you accumulate more data to fill your storage, access time looks set to become the bottleneck. You can mitigate some of it using massive parallelism in the readers where possible (multiple tapes), but that seems to be missing the point.
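The scale of the seek-time problem is worth putting numbers on. A toy model, where the per-reader read rate and the reader count are illustrative assumptions:

```python
# Toy model of the access-time concern above: average time to reach a
# random bit on a linear (tape-like) medium, with and without parallel
# readers. The read rate is an illustrative assumption.

def avg_linear_seek_seconds(total_bits, bits_per_second, readers=1):
    """Expected seek to a uniformly random bit: each reader scans its own
    1/readers share of the medium, traversing half of it on average."""
    bits_per_reader = total_bits / readers
    return (bits_per_reader / 2) / bits_per_second

MOLE_OF_BITS = 6.022e23
READ_RATE = 1e12   # one terabit per second per reader (assumed)

print(avg_linear_seek_seconds(MOLE_OF_BITS, READ_RATE))          # ~3e11 s
print(avg_linear_seek_seconds(MOLE_OF_BITS, READ_RATE, 10**6))   # ~3e5 s
```

Even with a million terabit-per-second readers in parallel, the average seek across a mole of bits is measured in days, not milliseconds.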

(no subject)

Date: 2007-07-10 03:30 pm (UTC)
From: [identity profile] ajshepherd.livejournal.com
Wasn't it bubble memory (which was going to be all the rage back in the early 80s, I recall -- replace hard disks and everything) that destroyed each little magnetic bubble on reading, so you immediately had to write back what you'd just read if you wanted to keep any of it?

(no subject)

Date: 2007-07-10 03:47 pm (UTC)
From: [identity profile] nmg.livejournal.com
You're thinking of core, I believe.

But yes, there are a wealth of existing algorithms and techniques for quite mundane-yet-essential tasks (sorting and searching, for example) that were discarded as new technologies became widespread; some of these could yet become useful again.
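One concrete instance of such a tape-era technique is external merge sort, which needs only sequential passes over its media. A minimal in-memory sketch (real tape sorts would stream the runs to and from sequential devices):

```python
# External merge sort: a sorting technique from the tape era that maps
# well onto sequential media. This is a minimal in-memory sketch.

import heapq

def external_merge_sort(data, run_size):
    # Phase 1: cut the input into sorted "runs", each small enough
    # to fit in memory.
    runs = [sorted(data[i:i + run_size]) for i in range(0, len(data), run_size)]
    # Phase 2: k-way merge all runs with one sequential pass over each.
    return list(heapq.merge(*runs))

print(external_merge_sort([5, 3, 8, 1, 9, 2, 7], run_size=3))
# [1, 2, 3, 5, 7, 8, 9]
```

Both phases read and write strictly in order, which is exactly the access pattern a long-chain-molecule "tape" would offer.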

(no subject)

Date: 2007-07-10 04:36 pm (UTC)
From: [identity profile] ajshepherd.livejournal.com
Oh yeah. It was a long time ago!

(no subject)

Date: 2007-07-10 04:30 pm (UTC)
ext_58972: Mad! (Default)
From: [identity profile] autopope.livejournal.com
On the other hand, I'm thinking of long term archive media, for "long term" on the order of centuries or millennia -- possibly even geological deep time.

The issue here is good indexing and using metadata to keep track of the archived material, not to mention caching and hierarchical storage management. If it takes five minutes to discover exactly what great-great-uncle Bob was doing at 4pm on a damp Friday afternoon in July a hundred and twenty-five years ago, I suspect historians can live with that. Hell, if it takes five days to retrieve that data, they can live with it: because it sure as hell won't take five minutes or days to retrieve the really interesting stuff that's already been accessed repeatedly, and it won't take more than milliseconds to seconds to access what happened to you five weeks ago.
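The hierarchical-storage idea above can be sketched in a few lines: lookups fall through progressively slower tiers, and anything retrieved is promoted to the faster tiers, so the repeatedly-accessed material stays cheap. Tier names and latencies here are illustrative assumptions:

```python
# Minimal sketch of hierarchical storage management: lookups fall
# through slower tiers; retrieved items are promoted on access.
# Tier names and latencies are illustrative assumptions.

TIERS = [
    ("cache",   0.001),   # seconds per access (assumed)
    ("disk",    0.1),
    ("archive", 300.0),   # the "five minutes" deep-archive case
]

def fetch(key, store):
    """Return (value, latency); promote the item into every faster tier."""
    for i, (name, latency) in enumerate(TIERS):
        if key in store[name]:
            value = store[name][key]
            for faster, _ in TIERS[:i]:          # promotion on access
                store[faster][key] = value
            return value, latency
    raise KeyError(key)

store = {"cache": {}, "disk": {}, "archive": {"uncle_bob": "4pm, damp Friday"}}
print(fetch("uncle_bob", store))  # first access pays the archive latency
print(fetch("uncle_bob", store))  # now served from the fast tier
```

The first retrieval of great-great-uncle Bob's afternoon pays the full archive latency; every later one comes out of the cache tier.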
