Getting the schema for a DB table

Thomas Klausner domm at
Tue May 27 19:20:02 BST 2014


On Tue, May 27, 2014 at 06:33:16PM +0100, Simon Wistow wrote:
> On Tue, May 27, 2014 at 02:42:21PM +0100, Sam Kington said:
> > What's wrong with dependencies? It just takes a few minutes to 
> > install, disc is cheap, and it's not like any of the rest of your code 
> > is going to magically start loading these new modules and being all 
> > bloaty and insecure or whatever.
> - Process size
> - Start up time (but hey, at least it's not Rails :)
> - "Disk is cheap" is an article of faith, not of fact and only true for 
>   certain scenarios (e.g not embedded) 

Well, we use DBIx::SchemaChecksum in some utility scripts that are 
invoked by hand maybe once a day (and quite often less than that).

So neither process size nor start up time was an issue when writing it.

> - Developer pain such as hoping all those 120 modules install with no 
>   errors and tracking down a bug to anywhere in the dependency chain

I've never had problems with installing deps since cpanm came out.

I usually do something like 
  cpanm --installdeps . -n -L local
then go read something, and by the time I'm back everything is installed.

> Not directly no. But indirectly ... that's exactly how loading libraries 
> work. They're still in my process memory.

I don't know what you're trying to achieve, but (as I said), we're not 
using this code in actual production code. It's supporting code, so it 
can be as big and slow as it needs to be, as long as it's easy to hack.

But I agree that one could extract the actual DBI introspection and 
checksumming code into a lean, low-on-deps distribution, and then 
pack this dist into the command-line-utility thing. Patches welcome :-)
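The core idea is small enough to sketch. Here is a hedged shell illustration (not DBIx::SchemaChecksum's actual algorithm, which walks the schema via DBI introspection; the demo.db name and the use of the sqlite3 and sha256sum tools are my own assumptions): dump the schema in a stable textual form and checksum it, so two databases with identical DDL yield identical checksums.

```shell
# Hypothetical sketch of schema checksumming, assuming the sqlite3 and
# sha256sum command-line tools are available.

# Create a throwaway database with one table.
sqlite3 demo.db 'CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);'

# Dump the DDL and reduce it to a single checksum; any schema change
# (new column, dropped index, ...) changes this value.
sqlite3 demo.db '.schema' | sha256sum | cut -d' ' -f1
```

A real implementation would normalise the dump (ordering, whitespace) so that semantically identical schemas always hash the same.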


for(ref bless{},just'another'perl'hacker){s-:+-$"-g&&print$_.$/}
