Practical Maya Programming with Python
Robert Galanakis
Improving the performance of PyMEL
I hope at this point, PyMEL's superiority over maya.cmds has been thoroughly demonstrated. If you have any doubt, feel free to rewrite the examples in this chapter to use maya.cmds and see how much cleaner the PyMEL versions are.
Having said that, PyMEL can be slower than maya.cmds. The performance difference can usually be made up by applying one of the following three improvements.
Defining performance
For most scripts, the performance difference is too minuscule to matter. Fast and slow are relative terms; the important metric is fast enough. If your code is fast enough, performance improvements won't really matter. Most of the tools you will write fall under this category. For example, making a pose mirroring tool go 500% faster doesn't matter if it only takes a tenth of a second in the first place.
Refactoring for performance
In most cases where PyMEL code actually needs to be sped up, rewriting small parts can make huge gains. Is your code doing unnecessary work inside a loop? Pull the work out of the loop.
The remove_selected function in the following example returns a list with any selected objects filtered out of the input list. The list comprehension evaluates pmc.selected() for every item in the input list. This inefficiency is highlighted in the following example.
>>> import pymel.core as pmc
>>> objs = pmc.joint(), pmc.joint()
>>> def remove_selected(objs):
...     return [item for item in objs
...             if item not in pmc.selected()]
>>> pmc.select(objs[0])
>>> remove_selected(objs)
[nt.Joint(u'joint2')]
Instead, we can evaluate pmc.selected() once, and use that value in the list comprehension. This change is highlighted in the following example.
>>> def remove_selected_faster(objs):
...     selected = pmc.selected()
...     return [item for item in objs if item not in selected]
>>> pmc.select(objs[0])
>>> remove_selected_faster(objs)
[nt.Joint(u'joint2')]
Perhaps your code is slow because it is looking up data that can be safely cached. In that case, we can cache the data when it is first calculated and reuse it.
In the following get_type_hierarchy function, we want to find a MEL type's type hierarchy. To do so, we need to create an instance of the node, invoke the nodeType method on it to get the hierarchy, delete the node, and return the hierarchy.
>>> def get_type_hierarchy(typename):
...     node = pmc.createNode(typename)
...     result = node.nodeType(inherited=True)
...     pmc.delete(node)
...     return result
>>> get_type_hierarchy('joint')
[u'containerBase', u'entity', u'dagNode', u'transform', u'joint']
Once we have the type hierarchy for a MEL type, we shouldn't need to calculate it again. To make sure we don't perform this unnecessary work, we can cache the result of the calculation, and return the cached value if it exists. This change is highlighted in the following code.
>>> _hierarchy_cache = {}
>>> def get_type_hierarchy(typename):
...     result = _hierarchy_cache.get(typename)
...     if result is None:
...         node = pmc.createNode(typename)
...         result = node.nodeType(inherited=True)
...         pmc.delete(node)
...         _hierarchy_cache[typename] = result
...     return result
>>> get_type_hierarchy('joint')
[u'containerBase', u'entity', u'dagNode', u'transform', u'joint']
Or perhaps your code is slow because it is calling a method for each item in a sequence, as the add_influences function does in the following example.
>>> j1 = pmc.joint()
>>> cluster = pmc.skinCluster(j1, pmc.polyCube()[0])
>>> def add_influences(cl, infls):
...     for infl in infls:
...         cl.addInfluence(infl)
>>> add_influences(cluster, [pmc.joint(), pmc.joint()])
Instead of iterating, check and see if the method can take in a list of arguments. We are fortunate that the SkinCluster.addInfluence method can, so let's remove the for loop, as highlighted in the following code.
>>> def add_influences(cl, infls):
...     cl.addInfluence(infls)
>>> add_influences(cluster, [pmc.joint(), pmc.joint()])
Nearly all of these changes end up making the code not just faster, but simpler too.
Rewriting inner loops to use maya.cmds
Sometimes, you need to communicate with Maya inside a tight loop or heavily used function. In these cases, PyMEL may actually be too slow if it has to go through several layers of abstraction. You can rewrite the function body to use maya.cmds while keeping PyMEL types for the relevant arguments and return value, as in the sketch below.
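As a rough sketch of what this can look like (the get_translations function here is a hypothetical example, not taken from the book), the function accepts PyMEL nodes at its boundary but talks to maya.cmds directly inside the loop:
>>> import maya.cmds as cmds
>>> def get_translations(nodes):
...     # PyMEL nodes come in; node.name() gives the string names
...     # that maya.cmds expects.
...     names = [node.name() for node in nodes]
...     # getAttr on a compound attribute returns a list containing
...     # a single (x, y, z) tuple.
...     return [cmds.getAttr(name + '.translate')[0] for name in names]
>>> translations = get_translations([pmc.joint(), pmc.joint()])
The caller still works entirely with PyMEL nodes; only the function body knows about maya.cmds, which keeps the string-based idioms contained.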
If this type of refactoring will help improve performance, you should look at using the Maya API, which is usually even faster. Refer to Chapter 7, Taming the Maya API, for an introduction to the Maya API.
You should also only take this approach when you've identified that the code in question is a bottleneck, and that speeding it up will yield a significant overall improvement. You can use the standard library's cProfile module to profile Python code; a minimal example is sketched below. There are many resources on the Internet that explain the process in greater detail.
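As a sketch of that workflow (reusing remove_selected and objs from the earlier example), cProfile.run takes a statement as a string, executes it, and prints a table of call counts and cumulative times per function (the table output is omitted here):
>>> import cProfile
>>> cProfile.run('remove_selected(objs)', sort='cumulative')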
We should pursue high-quality, composable code. But if that code takes unacceptably long to run, it matters less how clean it is. On the other hand, we cannot disregard composability and quality for the sake of performance. Code using maya.cmds will inevitably end up less composable and Pythonic than code using PyMEL, because MEL's idioms are very far from Python's. When we cannot entirely eliminate code that uses maya.cmds, we should contain and limit it.