Category Archives: Programming

Catalyst Config Hack

Many of the modules for our Catalyst systems have their own separate models. We then use a subset of those modules in a single application, and it makes sense to store all of those database models in a single physical database. This means we end up with a lot of duplicate model config keys in our Catalyst config.

<Model::Processor>
    connect_info dbi:Pg:dbname=app_db;host=db-server
</Model::Processor>
<Model::SysParams>
    connect_info dbi:Pg:dbname=app_db;host=db-server
</Model::SysParams>
<Model::AuditTrail>
    connect_info dbi:Pg:dbname=app_db;host=db-server
</Model::AuditTrail>

A lot of database configurations aren’t just a single line, and you end up spending forever copy/pasting and then modifying the config. I wanted to come up with a way to avoid all that repetition.

Catalyst::Plugin::ConfigLoader provides two potential hooks for doing things after the configuration has been loaded. One is finalize_config; the other is config_substitutions, driven by the substitutions setting. Because we are using CatalystX::AppBuilder, finalize_config doesn’t appear to be hookable, or at least I didn’t figure out how to hook it. The substitutions hook is, however, perfectly usable, because it just requires some configuration set up in code.

$config->{'Plugin::ConfigLoader'}->{substitutions} = {
    duplicate => sub {
        my $c = shift;
        my $from = shift;
        my $to = shift;
        $c->config->{$to} = $c->config->{$from};
    }
};

This then lets me do the following in the config file.

<Model::Processor>
    connect_info dbi:Pg:dbname=app_db;host=db-server
    connect_info dbusername
    connect_info dbpassword
    <connect_info>
      quote_char "\""
      name_sep .
    </connect_info>
</Model::Processor>

__duplicate(Model::Processor,Model::SysParams)__
__duplicate(Model::Processor,Model::AuditTrail)__
__duplicate(Model::Processor,Model::AuthDB)__

This copies the configuration specified for the Processor to the SysParams, AuditTrail and AuthDB model config settings. This happens right after the configuration has been loaded, and before the models are loaded so all the settings are there just in time. That saves me lots of copy/paste, and even more editing. I don’t even need to copy those directives into my _local.conf because the _local.conf settings for the Processor model will be what get copied.


Skipping Python unit tests if a dependency is missing (fixed)

I got some feedback on my previous post about skipping tests in Python’s unittest pointing out that my solution was flawed. As Mu Mind and the denizens of Stack Overflow pointed out, the solution has a problem when the tests are run directly via python. At first I didn’t realise how flawed it was; technically I had been running my tests via python and nosetests regularly. I just hadn’t realised that I’d never run the tests via python while the dependency was missing. If you do that you get this ugly error,

ERROR: test_openihm_gui_interface_db_mixin (unittest.loader.ModuleImportFailure)
----------------------------------------------------------------------
ImportError: Failed to import test module: test_openihm_gui_interface_db_mixin
Traceback (most recent call last):
  File "python2.7/unittest/loader.py", line 252, in _find_tests
    module = self._get_module_from_name(name)
  File "python2.7/unittest/loader.py", line 230, in _get_module_from_name
    __import__(name)
  File "tests/test_openihm_gui_interface_db_mixin.py", line 6, in 
    raise unittest.SkipTest("Need PyQt4 installed to do gui tests")
SkipTest: Need PyQt4 installed to do gui tests

It does tell you clearly what the problem was, but that really wasn’t the intention. The idea was to skip the tests silently.

From the answers and comments on the Stack Overflow post I stitched together this ugly but hopefully working solution, which should work whichever unit test runner you choose.

import unittest
try:
    import PyQt4
    # the rest of the imports


    class TestDataEntryMixin(unittest.TestCase):
        def test_somefeature(self):
            # actual tests go here.
            pass

except ImportError, e:
    if e.message.find('PyQt4') >= 0:
        class TestMissingDependency(unittest.TestCase):

            @unittest.skip('Missing dependency - ' + e.message)
            def test_fail(self):
                pass
    else:
        raise

if __name__ == '__main__':
    unittest.main()

I dislike the fact that I can’t hide the logic away at the top, but wrapping the whole test file in this code works. If the import fails we create a dummy test case that does a skip to indicate the problem. I’ve also tried to ensure that we only catch the exception we’re expecting, and let through any we aren’t.

Now if you run the tests in verbose mode you’ll see this when there is a missing dependency.

test_fail (test_openihm_gui_interface_mixins.TestMissingDependency) ... skipped 'Missing dependency - No module named PyQt4'

Hooking into the OpenERP ORM

I’ve been hooking into the OpenERP ORM layer on a few of the models to add full-text search via an external engine. Thanks to the way OpenERP is structured, that appears to be quite a reliable approach. As I was doing it I found that I wanted a common set of hooks on several models, which suggested I should refactor the hooks into a mixin or a base class. After playing about with my OpenERP module I’ve come to the conclusion that creating a base class seems to be the most reliable way to hook the methods. The one trick you need to be aware of is the _register flag, which you want to set to False for your base class.

from openerp.osv import osv

class search_base(osv.osv):

    _register = False

    def write(self, cr, user, ids, vals, context=None):
        success = super(search_base, self).write(cr, user, ids,
                                                 vals, context)
        # ... do stuff here
        return success


class product_template_search(search_base):
    _name = "product.template"
    _inherit = "product.template"
    _register = True


class product_search(search_base):
    _name = "product.product"
    _inherit = "product.product"
    _register = True

product_search()
product_template_search()

Without that you’ll end up with the ORM whining that the search_base class has no _name attribute as it tries to register it as a model.

openerp.osv.orm: The class search_base has to have a _name attribute
openerp.netsvc: ValueError
The class search_base has to have a _name attribute
> /usr/lib/pymodules/python2.7/openerp/osv/orm.py(967)__init__()
-> raise except_orm('ValueError', msg)

Skipping Python unit tests if a dependency is missing

Update: this is a flawed method of skipping the tests; I’ve written up an improved version based on the feedback I received.

In all the examples of using Python’s unittest to skip tests, I’ve not seen anyone explain how to skip tests when a library isn’t installed.

The simplest way appears to be to try to import the relevant libraries and catch the exception thrown if the library isn’t there.

import unittest
try:
    import PyQt4.QtCore
    import PyQt4.QtGui
except ImportError:
    raise unittest.SkipTest("Need PyQt4 installed to do gui tests")

This example at the top of a test file simply skips all the tests if PyQt4 isn’t available.


Perlbrew with options like threading

This blog post on the *n*x blog gives a great description of how to install perl with threading, something that you need to do if you want to run Padre.

The only thing I’d add is that you can use the --as option to install perl under an alias. This is useful if you want to build a threaded version of a perl you already have installed. You can simply do,

perlbrew install perl-5.14.2 -Dusethreads -Duselargefiles -Dcccdlflags=-fPIC -Doptimize=-O2 -Duseshrplib -Duse64bitall -Darchname=x86_64-linux-gnu -Dccflags=-DDEBIAN --as threaded-perl-5.14.2

(Note that I’ve customised this for my 64-bit Ubuntu OS.)
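Once it has built, the alias behaves like any other perlbrew-managed perl, so you can switch to it as normal. A rough sketch (the alias name is simply the one chosen above):

perlbrew list
perlbrew switch threaded-perl-5.14.2
perl -V:usethreads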


Perl debugger

Since writing my initial post on my settings for the Perl debugger I’ve found another setting that has proved invaluable. If you create a ~/.perldb file, the debugger will load your settings from it each time; mine now has the same two lines on whatever machine I’m using,

$DB::deep = 1000;
parse_options('dumpDepth=2');

The dumpDepth=2 trick is something I picked up from Chisel’s blog post on the subject; it simply limits the x command to a depth of 2 by default. This makes life a lot simpler and means I’m caught out less often by accidentally dumping the entire state of an application.


Faking it with Test::WWW::Mechanize::PSGI

Or should that be getting real? The Test::WWW::Mechanize::Catalyst module has a really handy feature: you can set the CATALYST_SERVER environment variable to point your tests at a real live server. This is really handy for a couple of things, one of which is monitoring real traffic for those cases where you can’t quite decide whether it’s your test or your server that’s broken. You can fire up Wireshark and watch the actual traffic going over the wire.
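For reference, using that feature is just a matter of setting the variable when running the tests, along these lines (the server URL and test file name are only placeholders):

CATALYST_SERVER=http://localhost:3000/ prove -l t/live_site.t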

With Test::WWW::Mechanize::PSGI there doesn’t appear to be that option, and Test::WWW::Mechanize and the modules it wraps don’t really appear to have any simple way to provide it. A look at the T::W::M::Catalyst module suggests it was actually a fair chunk of work to implement that feature. Since I’m lazy and I wanted to solve a problem, I came up with a simple way to fake it for now. I converted the URLs in my tests from simple /path form to fully qualified http://localhost:5000/path URLs, then added a simple bit of code to flip between Test::WWW::Mechanize and Test::WWW::Mechanize::PSGI.

use Test::WWW::Mechanize;
use Test::WWW::Mechanize::PSGI;
use Plack::Util;

my $mech;
if($ENV{EXTERNAL_SERVER})
{
    $mech = Test::WWW::Mechanize->new;
}
else
{
    my $app = Plack::Util::load_psgi 'app.psgi';
    $mech = Test::WWW::Mechanize::PSGI->new( app => $app );
}

Now if I run the tests with EXTERNAL_SERVER=1 they go to a real server rather than straight to the code. That means I can listen on the loopback adaptor in Wireshark and easily see what’s actually going over the wire. It’s not as neat as the CATALYST_SERVER feature, but it’ll do for now.
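In practice that looks something like the sketch below, assuming the app is being served by plackup on its default port of 5000 (the test file name is just a placeholder):

plackup app.psgi &
EXTERNAL_SERVER=1 prove -l t/basic.t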


POST testing JSON REST APIs with WWW::Mechanize

Having just read the article on POST and PUT in REST APIs, I realised I’d goofed a couple of the operations on one of my APIs.

I have tests, and this is Perl, so how hard can it be to convert over? With Catalyst::Action::REST it is indeed pretty simple to convert my calls; in fact it’s a case of changing the word PUT to POST in some of my function names. It’s the tests where things got interesting. I’m using Test::WWW::Mechanize variants to do my testing because they’re nice and simple. Unfortunately switching from put_ok to post_ok didn’t produce the desired results: the API wasn’t reading the data at all. A bit of digging revealed that the post_ok call encoded the parameters in the application/x-www-form-urlencoded style before posting, whereas the put_ok call just passed the JSON through raw.

Some digging into the Catalyst::Action::REST module revealed that they may well have had a similar issue because they created a little helper module called Test::Rest (not to be confused with Test::Rest on CPAN) which created the requests by hand for use in the test suite. Of course they may have simply been avoiding dependencies, and just been lucky to avoid the magic.

I didn’t manage to figure out a way to turn that off, so in the end I did a similar thing. The fix for my test suite was to create a simple sub like this that rolls my own POST request without any magic, then to call $mech->request to pass the request straight through, as I was already doing with the DELETEs. It’s basically a dumbed-down version of a method from the Test::Rest in Catalyst::Action::REST.

use HTTP::Request;

sub construct_post
{
    my $url = shift;
    my $data = shift;

    my $req = HTTP::Request->new( "POST" => $url );
    $req->content_type( 'application/json' );
    $req->content_length(
        do { use bytes; length( $data ) }
    );
    $req->content( $data );
    return $req;
}
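A test can then fire the raw JSON at the API like this. This is a rough sketch rather than a verbatim extract from my suite; the /api/widgets endpoint and the payload are made up, and the mech is built the same way as in the Plack testing example:

use Test::More;
use Test::WWW::Mechanize::PSGI;
use Plack::Util;
use JSON;

my $app  = Plack::Util::load_psgi 'app.psgi';
my $mech = Test::WWW::Mechanize::PSGI->new( app => $app );

# hand-rolled request, so no form-encoding magic gets applied
my $json = encode_json( { name => 'Widget', price => 42 } );
$mech->request( construct_post( 'http://localhost/api/widgets', $json ) );
ok( $mech->success, 'POST accepted' );

done_testing();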

Catalyst and Plack testing

Since Catalyst switched to Plack for its underlying engine, lots of funky new possibilities have opened up. You can move parts of your infrastructure outside of your Catalyst app while still making use of the Catalyst configuration, and still keep it all in the code for the project.

When it comes to testing, it does not appear that the Catalyst::Test and Test::WWW::Mechanize::Catalyst modules automatically pick up your .psgi file when building the test server. That might be a feature for some tests, but sometimes you’ll definitely want to test the whole lot together. Luckily that’s fairly simple. To replace Test::WWW::Mechanize::Catalyst you can switch to Test::WWW::Mechanize::PSGI, and probably the closest thing to Catalyst::Test is Plack::Test. Both test modules require you to provide an $app object, which Plack::Util makes easy to load from your existing .psgi file.

Here’s a really simple test converted over to use a .psgi file.

use Test::Most;
use Test::WWW::Mechanize::PSGI;
use Plack::Util;

my $app = Plack::Util::load_psgi 'app.psgi';
my $mech = Test::WWW::Mechanize::PSGI->new( app => $app );

$mech->get_ok('/');

done_testing();
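For tests that were written against Catalyst::Test’s request() style, Plack::Test is the nearest equivalent. A minimal sketch along the same lines, again loading the same app.psgi:

use Test::Most;
use Plack::Test;
use Plack::Util;
use HTTP::Request::Common;

my $app = Plack::Util::load_psgi 'app.psgi';

test_psgi $app, sub {
    my $cb  = shift;
    my $res = $cb->( GET '/' );
    ok( $res->is_success, 'GET / succeeded' );
};

done_testing();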

Adding new reports to OpenERP

Creating new modules for OpenERP is pretty simple. Here is how to package up some new reports into a module.

In this example I’ve created another report to use with sales orders. It is based on the standard sales order report, so in actual fact I take some of the code and XML from that original report to build it up, then tweak them to produce the collection docket I want. I’m not going to reproduce the report itself here, since that’s a trivial thing to customise. The interesting part is really how the module is packaged together and how to spot mistakes in the packaging.

The files/folders in the zip

module/__openerp__.py                # this contains the module info
module/report/collection_docket.rml  # the report
module/report/sale_order.py          # sets up the report parser
module/report/__init__.py            # just loads the code
module/reports.xml                   # registers the report
module/__init__.py                   # loads the code

__openerp__.py

{
   'name': 'Extra Sales Reports',
   'version': '0.01',
   'category': 'Extra reports for sales',
   'description': """
   The extra sales reports needed for our project,

   * Reports
     - Collection Docket

   """,
   'author': 'OpusVL',
   'website': 'http://www.opusvl.com',
   'depends': ['stock', 'procurement', 'board', 'sale'],
   'init_xml': [],
   'update_xml': [
       'reports.xml',
   ],
   'demo_xml': [],
   'test': [],
   'installable': True,
   'active': False,
}

__init__.py

import report

reports.xml

<?xml version="1.0" encoding="utf-8"?>
<openerp>
    <data>
        <report auto="False" id="collection_docket" model="sale.order" name="sale.collection_docket" rml="module/report/collection_docket.rml" string="Collection Docket" />
    </data>
</openerp>

report/__init__.py

import sale_order

report/sale_order.py

from report import report_sxw
import time


# this bit is basically a copy of the stuff
# in the regular sale order module.
# I can't just import that code because it causes
# the sale.order report to get registered again
# causing it to complain.
# otherwise I’d do this - from addons.sale.report import order 
class order(report_sxw.rml_parse):
    def __init__(self, cr, uid, name, context=None):
        super(order, self).__init__(cr, uid, name, context=context)
        self.localcontext.update({
            'time': time,
        })


report_sxw.report_sxw('report.sale.collection_docket', 'sale.order', 'addons/module/report/collection_docket.rml', parser=order, header="external")

report/collection_docket.rml

<?xml version="1.0"?>
<document filename="Sale Order.pdf">
... this is a copy of the addons/sale/report/sale_order.rml customised as necessary.

The module is then zipped up for distribution in a regular .zip file. This can either be imported directly into OpenERP or it can be unzipped manually into the addons directory.

Installation of the module via the OpenERP client

  1. Go to Administration->Modules,
  2. Select Import module and select the zip [1].
  3. Select the module and mark it for install.
  4. Now restart OpenERP server.
  5. Now go back to the client and schedule the install of the module.

If you install your module this way, you will actually find that the module is left in its zip file and the OpenERP server simply reads the files from the zip as if they were an extension of the addons directory.

Manual installation

  1. Find the addons directory
  2. Unzip the module into it.
  3. Restart the OpenERP server
  4. Go to Administration -> Modules
  5. Select ‘Update Module List’.
  6. Find the module and schedule it for install.

Use

The report can now be used programmatically using the standard report method and referencing it as sale.collection_docket. Alternatively you’ll find that a button has appeared on the sales order screen in the OpenERP client that allows you to print the collection docket alongside the button for printing the regular sales order report.
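As a rough sketch of the programmatic route (OpenERP 6.x style; the cr, uid, ids and data dict here are purely illustrative), calling the report from server-side code looks roughly like this:

import netsvc

# the report was registered as a service under this name by report_sxw.report_sxw
report_service = netsvc.LocalService('report.sale.collection_docket')

# render it for a list of sale.order ids; this should return the
# document body along with its format
(document, doc_format) = report_service.create(
    cr, uid, ids,
    {'model': 'sale.order', 'id': ids[0], 'report_type': 'pdf'},
    context)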

Troubleshooting

ZipImportError: bad local file header in /usr/share/pyshared/openerp-server/addons/myextra_reports.zip

This normally indicates it is time to restart the server. If you have just imported the zip file of the module and tried to install it straight away you will often get this error.

ERROR:web-services:[01]: Exception: Report /usr/share/pyshared/openerp-server/addons/myextra_report/report/collection_docket.rml doesn’t exist or deleted :

This is generally caused by a typo in the parser python file where the report is registered with the report_sxw.report_sxw call.

ERROR:service:This service does not exist: ‘report.sale.collection_docket’
ERROR:web-services:[07]: KeyError: ‘report.sale.collection_docket’

The report hasn’t been registered. Has your module been installed and loaded against the current database? Remember that installing the module into the OpenERP server and installing it against your current database are two separate steps.

ERROR:web-services:[01]: Exception: Start tag expected, '<' not found, line 1, column 1

This can be caused by a bad filename registered using the xml file. The path to the file should be in relation to the addons path. In other words, if the full path is /usr/share/pyshared/openerp-server/addons/module/report/docket.rml, the relative filename you need is module/report/docket.rml.
http://www.openerp.com/forum/topic26308.html

The alternative cause of this problem is that there is a Unicode BOM at the start of the file. One of the support entries appears to indicate that this will cause the parser to dislike the document.

https://bugs.launchpad.net/openobject-server/+bug/694409

[1] If you get a permissions error, that’s normally because the addons directory isn’t writeable by the user that OpenERP is running as.

Further reading

The OpenERP documentation regarding reports.
