python - IPython parallel cluster: parallel decorator and higher order functions


I want to take an existing function (from a scikit-learn example: the "predict" function) and apply it to a dataset using multiple cores.

My first naive approach:

def parallel_predict(classifier):
    @dview.parallel(block=True)
    def predict(matrix):
        return classifier.predict(matrix)
    return predict

This doesn't work (the multiple cores don't start spinning up). Is there a way to make it work?

Or is there a way to have "non-iterable" arguments passed to a @dview.parallel function?
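For reference, here's a self-contained sketch of the surrounding setup; the Client/DirectView creation, the fitted classifier, and the toy data below are placeholders for my actual code, assuming a running ipcluster:

# Placeholder setup around parallel_predict above, assuming a running ipcluster
# and the ipyparallel Client API (on older installs: from IPython.parallel import Client).
import numpy as np
from ipyparallel import Client
from sklearn.ensemble import RandomForestClassifier

rc = Client()      # connect to the running cluster
dview = rc[:]      # DirectView over all engines

X_train = np.random.rand(100, 5)            # toy training data (placeholder)
y_train = np.random.randint(0, 2, 100)
classifier = RandomForestClassifier().fit(X_train, y_train)

X_test = np.random.rand(1000, 5)            # the matrix I want to predict on

predict = parallel_predict(classifier)      # wrap the fitted classifier
predictions = predict(X_test)               # should scatter rows across the engines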

A couple of thoughts, both based on the remote execution doc. I'm used to the @remote decorator and not the @parallel one you've used, but they should still apply to your case. (I can't seem to get that doc to load today, for some reason.)

Is it the case that remote execution isn't working because the classifier module isn't accessible on the engine? If so, that can be solved by adding an import statement to the decorated function explicitly, by using with dview.sync_imports(): import classifier (as per this example), or by adding a @require('classifier') decorator (from the same section of the doc). As far as the last option goes, I'm not sure how multiple decorators interact (it's probably easiest to just give it a whack).
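Roughly what I have in mind, as an untested sketch; I'm assuming the classifier comes from sklearn (swap in whatever module yours actually lives in), and the decorator stacking order in the last option is a guess:

# Option 1: do the import inside the decorated function itself.
def parallel_predict(classifier):
    @dview.parallel(block=True)
    def predict(matrix):
        import sklearn                      # make sure the module is loaded on the engine
        return classifier.predict(matrix)
    return predict

# Option 2: sync the import to all engines up front.
with dview.sync_imports():
    import sklearn

# Option 3: stack @require on the inner function (ordering untested).
from ipyparallel import require             # older installs: from IPython.parallel import require

def parallel_predict_required(classifier):
    @dview.parallel(block=True)
    @require('sklearn')
    def predict(matrix):
        return classifier.predict(matrix)
    return predict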

The second thought is to check for remote exception(s) (here's the doc on that). They're a lot more explicit than getting nothing back. For example, something like:

x = e0.execute('1/0')
print x.metadata['error']

x = predict(matrix)
print x.metadata['error']
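Or, non-blocking, so there's an AsyncResult to poke at; the block=False re-declaration and the names below are just for illustration, reusing the setup from your question:

# Re-declare the inner function non-blocking so the call hands back an
# AsyncMapResult we can inspect, instead of blocking (or silently returning nothing).
@dview.parallel(block=False)
def predict_async(matrix):
    return classifier.predict(matrix)

ar = predict_async(X_test)      # AsyncMapResult
ar.wait()

print(ar.metadata)              # per-engine metadata dicts, including any 'error' entries

if not ar.successful():
    try:
        ar.get()                # re-raises the remote exception(s) locally
    except Exception as e:
        print(e)                # e.g. a CompositeError carrying the remote traceback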
