I often wondered how to list the locations of the files installed by a Debian package, so if anybody else needs it - here it is:

    # dpkg -L libavcodec-dev
    /.
    /usr
    /usr/lib
    /usr/lib/i686
    /usr/lib/i686/cmov
    /usr/lib/libavcodec.a
    ...
    /usr/lib/libavcodec.so

Watching TV is a waste of time as it is, but the commercials make it even worse. So my wishlist includes an appliance like the noadtv proxy below (concept hereby made public domain GPLv3).
The noadtv proxy filters out the ads from the signal and replaces them with content chosen by the user.
The noadtv proxy box has the following connectors:
The noadtv proxy could work like this:
The noadtv proxy then:
The network connection is used to configure the device, e.g. where to get the content for the ad breaks, and a special setting can be used to make the device look up the address of the content on a tvproxy website.
This website would then offer both free (still no ads!) and paid services that would let users choose between categories of YouTube videos, scrolling tweets, Facebook pages, Slashdot pages or whatever.
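To make the idea a little more concrete, here is a minimal python sketch of how the device could resolve the ad-break content. Everything in it - the config keys, the lookup URL, the JSON reply format - is invented for illustration, since the appliance only exists as a wish:

    # hypothetical sketch - the appliance, config keys and URLs do not exist
    import json
    import urllib2

    def resolve_ad_content(config):
        """Return the URL of the content to show during ad breaks."""
        if config.get('use_tvproxy_lookup'):
            # special setting: ask a tvproxy website where the content lives
            reply = urllib2.urlopen(config['tvproxy_url']).read()
            return json.loads(reply)['content_url']
        # otherwise the user has configured the content source directly
        return config['content_url']

    print resolve_ad_content({'content_url': 'http://example.org/my-feed'})
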
If you have any interest in this appliance, or have already made a similar one, then please send a mail to
I couldn't figure out how to make Facebook eat my opengraph meta tags until I discovered that the linter actually cares about the order of the attributes in the meta tag - 'property' must come before 'content'.
Below is an example in Python and Genshi of how to render these tags.
The <stripped> and <loop> tags are just my own convention for a wrapper that is stripped from the output (py:strip) and a placeholder that is replaced by the loop output (py:replace) - I might as well have called them <peter> and <paul>.
Here's the code:
htmlpage=""" <html xmlns:py="http://genshi.edgewall.org/"> <head> <title>${page.title}</title> <stripped py:def="render_og_meta(property, content)" py:strip=""> <meta property="${property}" content="${content}"/> </stripped> <loop py:for="(k,v) in page.og_meta.items()" py:replace="render_og_meta(k,v)" /> </head> </html> """ data={ 'title':'hello opengraph', 'og_meta':{ 'og:image':'/img/blog.png', 'og:description':'opengraph description', 'og:url':'my url', 'og:site':'my site', 'og:type':'blog', } } from genshi.template import MarkupTemplate tmpl = MarkupTemplate(htmlpage) stream = tmpl.generate(page=data) print stream
The output of the above Python program is the following:

    <html>
      <head>
        <title>hello opengraph</title>
        <meta property="og:url" content="my url"/>
        <meta property="og:site" content="my site"/>
        <meta property="og:image" content="/img/blog.png"/>
        <meta property="og:description" content="opengraph description"/>
        <meta property="og:type" content="blog"/>
      </head>
    </html>

If you have any corrections or comments on the above post, then send them to
It should not be too hard to share content from the blog on Facebook and other sites.
So I went for one of the free services out there: addthis
The blog has been equipped with an 'addthis' widget to make it easier to share it on social networks. Next stop is to implement the opengraph protocol for the blog, so that it will give og: attributes that have meaning for the particular blog post the user is sharing, and not the page in general.
I used to end up puzzled and with no colors after installing pygments - but hopefully not anymore after this little writeup.
First, install the requirements:

    # install the packages
    pip install markdown
    pip install pygments

    # find the styles available
    pygmentize -L

    # choose one and make a css file
    pygmentize -S manni -f html > css/pygments/manni.css

Then, in your Python program, you can generate HTML from embedded code blocks with:

    # txtfile holds the markdown source as a string
    import markdown
    htmlfile = markdown.markdown(txtfile, ['codehilite'])

and finally remember to put the following in txtfile or in htmlfile:

    <link href="/css/pygments/manni.css" rel="stylesheet" media="screen" type="text/css"/>

Remember that code blocks must be separated from the preceding text by one blank line, as well as indented 4 spaces.
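To tie the steps together, here is a minimal sketch of the whole flow. The file names post.txt and post.html are just examples:

    # minimal sketch of the whole flow - the file names are just examples
    import codecs
    import markdown

    txtfile = codecs.open('post.txt', 'r', 'utf-8').read()
    htmlbody = markdown.markdown(txtfile, ['codehilite'])
    page = u"""<html><head>
    <link href="/css/pygments/manni.css" rel="stylesheet"
          media="screen" type="text/css"/>
    </head><body>%s</body></html>""" % htmlbody
    codecs.open('post.html', 'w', 'utf-8').write(page)
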
The homepage was more than a year old - time for a change. But I would rather not lose my content, so - migration.
Going from bells and whistles:
To a more relaxed layout.
There are many pages on my old homepage that are currently visited more than 100 times a day. It would be a shame to lose all that company - what are the options?
My solution will be the default method in CherryPy, but I create the rewrite rules just the same.
The making of an initial migration script
I figured I'd have to change it a bit along the way, so I made a script - which is also fun ;-)
The following script will create a directory 'oldsite' in the current working directory and make a tree of structured text files in there, to be served via logic in the new CMS whenever one of the old pages is hit.
It also writes a set of rewrite rules, in case those would be a better choice for a setup other than my CherryPy CMS.
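As an example of what ends up in the rules file: a page fetched from barskdata.net/about.html and an image at barskdata.net/img/logo.png (both hypothetical paths) would give rules like these:

    RewriteRule ^/about.html /oldsite/about.html [NC]
    RewriteRule ^/img/logo.png /oldsite_static/img/logo.png [NC]
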
The imports section:

    #!/usr/bin/env python
    # coding: utf-8
    """
    make a set of files with just the essential content
    write urlrewrite rules
    """
    import os, sys
    import tempfile
    import shutil
    import subprocess
    import codecs
    from BeautifulSoup import BeautifulSoup

Specifying where to put the old content - it will be placed in your current working directory after completion:

    rules_file = 'htaccess_rewrites'
    rules = None  # writable file, created/opened in __main__
    oldsite_dir = 'oldsite'

The function that does all the work, handling one file from the old site at a time:

    def copyfile(oldfilepath):
        newfilepath = oldfilepath.replace('/barskdata.net/', '/%s/' % oldsite_dir)
        if not os.path.exists(os.path.dirname(newfilepath)):
            os.makedirs(os.path.dirname(newfilepath))
        base, oldext = os.path.splitext(newfilepath)
        dirprefix, urlrelative = base.split('/%s/' % oldsite_dir, 1)
        if oldext == '.html':
            # html pages become .txt files holding just the essential content
            ext = '.txt'
            newfilepath = base + ext
            urlrelative += oldext
            # write in rules file
            rules.write("RewriteRule ^/%s /oldsite/%s [NC]\n" % (
                urlrelative, urlrelative))
            soup = BeautifulSoup(codecs.open(oldfilepath, 'r', 'utf-8').read())
            content = soup.find(
                'div', {'id': 'content_container'}).findAll(
                'div', {'class': 'floatbox'})[1]
            outfile = codecs.open(newfilepath, 'w', 'utf-8')
            outfile.write(unicode(content))
        elif oldext in ['.png', '.jpg']:  # splitext keeps the leading dot
            urlrelative += oldext
            rules.write("RewriteRule ^/%s /oldsite_static/%s [NC]\n" % (
                urlrelative, urlrelative))
            shutil.copyfile(oldfilepath, newfilepath)

The main program that wgets a copy of the old site and calls the above function for each file.
At the end the results are copied to your current working directory.
If you want to use it, don't forget to change barskdata.net to something else;-)

    if __name__ == '__main__':
        orgdir = os.getcwd()
        tempdir = tempfile.mkdtemp()
        os.chdir(tempdir)
        rules = codecs.open(os.path.join(tempdir, rules_file), 'w', 'utf-8')
        subprocess.call(['/usr/bin/wget', '-r', 'http://barskdata.net'])
        for root, dirs, files in os.walk(os.path.abspath('./barskdata.net')):
            for f in files:
                print f
                copyfile(os.path.join(root, f))
        shutil.copytree(os.path.join(
            tempdir, oldsite_dir), os.path.join(orgdir, oldsite_dir))
        shutil.copyfile(os.path.join(
            tempdir, rules_file), os.path.join(orgdir, rules_file))
        shutil.rmtree(tempdir)

Finally, the default CherryPy method to serve the migrated content:

    @cherrypy.expose
    @template.output('index.html')
    def default(self, *args, **kwargs):
        """ default page """
        store, data = self.pagedata()
        oldsite = os.path.join(
            os.path.dirname(os.path.abspath(__file__)), 'oldsite')
        oldfile = os.path.join(oldsite, *args)
        if oldfile[-5:] == '.html':
            oldfile = oldfile[:-5] + '.txt'
        if os.path.exists(oldfile):
            content = codecs.open(oldfile, 'r', 'utf-8').read()
            # (Danish) "archived page - from an earlier version of the homepage"
            data['teaser'] = u"arkiveret side - fra en tidligere version af hjemmesiden"
            data['exclusive'] = u"%s - arkiveret side\n\n%s" % (args[-1], content)
            # commit before returning the rendered page
            commit_and_report_to(store, data['errors'])
            return template.render(page=data)
        else:
            raise cherrypy.HTTPError(404)

The new homepage is nearly done - now running on CherryPy and mod_wsgi. The choice was not too hard. Standing on the shoulders of (younger - much younger) giants, it is probably best to take on something you know well, and expect the giants to look after your back while you're at it.
The CMS is composed of:
None of the components are new - they all have a number of years and several releases behind them.
I would like to make a limited toolset available for this CMS, like: