Actually, Steve's dead right about the "how the HELL do we read this stuff?" problem being a big one; but the point stands -- if we follow the logic of Moore's Law and Richard Feynman's "There's plenty of room at the bottom" to its logical conclusion, we end up with roughly 6 x 10^23 bits per mole of whatever-the-hell-our-storage-medium-is, and to all intents and purposes it makes no difference whether we're using single carbon atoms in a tetrahedral matrix or, say, oxidation states of a transition metal in some complex supporting matrix -- one or two orders of magnitude difference in mass doesn't make any really significant difference to the social and cultural effect of $STORAGE_TOO_CHEAP_TO_DELETE being available.
Mind you, I'm willing to consider that any tech that can synthesize a diamond from raw atomic feedstock while discriminating C12 and C13 nuclei should be able to take a stab at reading it back out while dismantling it ...
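To put rough numbers on that density claim, here's a back-of-envelope sketch in Python; the choice of carbon-12 as the medium and the round 10^24 bits (~125 zettabytes) as "everything worth keeping" are my own illustrative assumptions, not anything from the comment above:

```python
# Back-of-envelope: storage density at ~1 bit per atom.
AVOGADRO = 6.022e23          # atoms (hence bits) per mole
MOLAR_MASS_C12 = 12.0        # grams per mole of carbon-12

bits_per_gram = AVOGADRO / MOLAR_MASS_C12   # ~5e22 bits per gram of diamond
total_bits = 1e24                           # assumed size of "everything"

grams_needed = total_bits / bits_per_gram   # ~20 g of diamond
print(f"{bits_per_gram:.2e} bits per gram")
print(f"{grams_needed:.1f} g to store 10^24 bits")
```

Swap in a heavier supporting matrix and the mass goes up by an order of magnitude or two, which -- as the comment says -- changes nothing socially.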
"any tech that can synthesize a diamond from raw atomic feedstock while discriminating C12 and C13 nuclei should be able to take a stab at reading it back out while dismantling it"
And there's the rub. We've got used to what is effectively random access mass storage; seek times on hard drives are comparatively low, and there are usually enough other things that your processor can be doing to while away the microseconds until it gets its next block of data.
Most of the proposals I've seen for extreme storage at the near-atomic level either assume that you have to dismantle your storage medium to get at the bits, or that you've reinvented tape storage via some amenable long-chain molecule with repeating subunits. In both cases, you'll probably be performing a linear seek to get at the data you want.
As your storage density grows towards N_A (Avogadro's number, ~6 x 10^23) bits per mole, and you accumulate more data to fill your storage, access time looks set to become the bottleneck. You can mitigate some of it using massive parallelism in the readers where possible (multiple tapes), but that seems to be missing the point.
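A toy latency model makes the linear-seek problem concrete. The archive size, read rate, and reader counts below are illustrative assumptions of mine, not figures from any actual proposal:

```python
# Mean access latency on linear (tape-like) media, and how splitting
# the archive across parallel readers chips away at it.

def expected_seek_s(total_bits, read_rate_bps, n_readers=1):
    """Mean linear seek: on average you traverse half of one reader's span."""
    span = total_bits / n_readers        # bits per tape / molecule
    return (span / 2) / read_rate_bps    # seconds to reach a random bit

archive = 1e18   # a 1-exabit archive
rate = 1e9       # 1 Gbit/s per read head

one_tape = expected_seek_s(archive, rate)            # ~5e8 s, roughly 16 years
many_tapes = expected_seek_s(archive, rate, 10_000)  # ~5e4 s, roughly 14 hours
print(one_tape, many_tapes)
```

Ten thousand readers buys you four orders of magnitude, and a random fetch still takes half a day -- which is why parallelism alone feels like it's missing the point.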
Wasn't it bubble memory (which was going to be all the rage back in the early 80s, I recall -- replace hard disks and everything) that destroyed each little magnetic bubble on reading, so you immediately had to write back what you'd just read if you wanted to keep any of it?
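The read-then-restore cycle that destructive-read memories force on you looks roughly like this -- a toy sketch, not a model of any real bubble-memory (or core-memory) controller:

```python
class DestructiveCell:
    """A storage cell whose raw read wipes the stored value."""
    def __init__(self, value):
        self.value = value

    def raw_read(self):
        value, self.value = self.value, None   # reading destroys the bit
        return value

def controller_read(cell):
    """What the controller must do: read, then immediately write back."""
    value = cell.raw_read()
    cell.value = value                         # restore, at the cost of a write
    return value

cell = DestructiveCell(1)
assert controller_read(cell) == 1
assert cell.value == 1                         # the bit survived the read
```

Every read costs you a write cycle -- which is exactly the kind of constraint an atom-by-atom disassembling reader would reimpose.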
But yes, there is a wealth of existing algorithms and techniques for quite mundane-yet-essential tasks (sorting and searching, for example) that were discarded as new technologies became widespread; some of these could yet become useful again.
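External merge sort is a good example of the breed: it sorts data far larger than working memory using nothing but sequential passes, which is exactly what tape-like media want. A minimal sketch, with Python lists standing in for tapes:

```python
import heapq

def external_sort(records, run_size):
    """Tape-style sort: sequential passes only, bounded working memory."""
    # Pass 1: read memory-sized chunks, sort each, write out as sorted "runs".
    runs = [sorted(records[i:i + run_size])
            for i in range(0, len(records), run_size)]
    # Pass 2: merge all runs, consuming each strictly front-to-back,
    # as a tape drive would.
    return list(heapq.merge(*runs))

data = [9, 3, 7, 1, 8, 2, 6, 5, 4, 0]
print(external_sort(data, run_size=3))   # [0, 1, 2, ..., 9]
```

Real tape sorts used multiple drives and cleverer merge patterns (polyphase and the like), but the shape of the algorithm is the same.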
On the other hand, I'm thinking of long term archive media, for "long term" on the order of centuries or millennia -- possibly even geological deep time.
The issue here is good indexing and using metadata to keep track of the archived material, not to mention caching and hierarchical storage management. If it takes five minutes to discover exactly what great-great-uncle Bob was doing at 4pm on a damp Friday afternoon in July a hundred and twenty-five years ago, I suspect historians can live with that. Hell, if it takes five days to retrieve that data, they can live with it: because it sure as hell won't take five minutes or days to retrieve the really interesting stuff that's already been accessed repeatedly, and it won't take more than milliseconds to seconds to access what happened to you five weeks ago.
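That tiering logic can be sketched in a few lines -- the latencies, tier structure, and record names here are all made up for illustration:

```python
CACHE_LATENCY = 0.001     # seconds: the fast tier, for hot data
ARCHIVE_LATENCY = 300.0   # seconds: the five-minute deep-archive fetch

class TieredArchive:
    """Promote-on-access hierarchical storage: pay the deep fetch once."""
    def __init__(self, records):
        self.archive = dict(records)   # vast, slow tier
        self.cache = {}                # small, fast tier

    def fetch(self, key):
        if key in self.cache:
            return self.cache[key], CACHE_LATENCY
        value = self.archive[key]      # the slow linear-seek retrieval
        self.cache[key] = value        # promote, so repeats are cheap
        return value, ARCHIVE_LATENCY

arc = TieredArchive({"uncle_bob_july": "gardening, apparently"})
_, first = arc.fetch("uncle_bob_july")    # cold: pays the archive latency
_, second = arc.fetch("uncle_bob_july")   # hot: served from the fast tier
print(first, second)
```

A real HSM system would also evict cold entries and keep a persistent index, but the asymmetry the comment describes -- slow once, fast forever after -- falls straight out of promote-on-access.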