If you want to call all your functions in an automatic way, specifically, all at once, then it isn't important that you be able to call them by some specific name like it was in the question you are referencing. You can just keep the functions in a list and call each one in turn.
As for dealing with a variable number of arguments, Python has a "splat" operator you may have seen in some method declarations: def __init__(self, *args, **kwargs). This operator can be used to unpack argument lists. (See this SO answer for more info)
If you store the parameters in another list you can iterate through your list of functions and apply the parameters to its corresponding function one-by-one using the same syntax and without specifying the number of arguments:
How you link the functions to their default parameters is another matter: you could keep them in two lists, or together as tuples in another list or dict, or define a lightweight class to hold them. It depends on your problem. Anyway, it sounds like the important part for you is the * notation.
If you want to pass in keyword arguments to the function, you can use the double-* notation to pass a dictionary and have it expanded into the keyword arguments that are required:
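The original code was not preserved here; a minimal sketch of the idea (the function names and parameters below are illustrative, not from the question):

```python
def add(a, b):
    return a + b

def greet(name, punctuation="!"):
    return "Hello, " + name + punctuation

# Positional arguments stored as tuples, keyword arguments as dicts;
# zip pairs each function with its own argument sets.
functions = [add, greet]
arg_lists = [(2, 3), ("world",)]
kwarg_lists = [{}, {"punctuation": "?"}]

for func, args, kwargs in zip(functions, arg_lists, kwarg_lists):
    # *args unpacks the tuple, **kwargs expands the dict into keyword arguments
    print(func(*args, **kwargs))
```

The loop never needs to know how many arguments each function takes; the splat operators handle the unpacking.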
answered Mar 3 '14 at 22:39
Actually that's a very good solution, thank you, but I have a little problem with the arguments. My arguments are like these: for the first function (n_samples=1000, noise=0.3, factor=0.5, random_state=1), for the second one (n_samples=1000, ratio=0.5, noise=0.1). If I want to create a list like yours for my arguments it gives me an error, and I cannot eliminate the left part of the arguments (I mean n_samples, ratio, etc.) because the order of the arguments is important. Do you have any solution for this? – Am1rr3zA Mar 4 '14 at 0:08
I added an edit about keyword arguments. Is this what you were after? – crennie Mar 4 '14 at 0:25
Yes, exactly, thank you – Am1rr3zA Mar 4 '14 at 1:28
OK, let's suppose that you put your arguments in a tuple. You will also need to refine your table so you know how many arguments each function takes, which means a modification of your data structure (you could use the inspect module to determine this, but that's an unnecessary complication at present):
If the functions have to be called in any specific order, by the way, a dict is not a good structure to use because it linearizes in an unpredictable order, but we'll overlook that for now. Here's a chunk of code that uses * notation to pass a tuple as individual arguments. I am using a reporter function to show you how the argument transmission works.
I hope that gives you plenty to chew on.
As a first possibility, one could try to use the code object for the function by getting its func_code property and then querying the co_argcount attribute of this code object.
Here is a small example that calls the functions stored in a dictionary by name, for a set of input arguments, 2, 3, 4 in this case:
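The example itself was not preserved; a sketch of what it might have looked like, using the Python 3 spelling __code__ (the text's func_code is the Python 2 name) and illustrative function names:

```python
def f1(a):
    return a

def f2(a, b):
    return a + b

def f3(a, b, c):
    return a + b + c

functions = {"f1": f1, "f2": f2, "f3": f3}
args = (2, 3, 4)

for name, func in functions.items():
    # co_argcount is the number of positional parameters the function declares
    n = func.__code__.co_argcount
    print(name, func(*args[:n]))
```

Each function is sliced exactly as many arguments as it declares, so f1 sees (2,), f2 sees (2, 3), and f3 sees (2, 3, 4).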
Another slightly less verbose possibility (you don't need to explicitly check the number of arguments for each function; the functions just use what they need) is to use Python's variable argument syntax. This would look as follows:
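A sketch of that variant (again with illustrative function names):

```python
def first(*args):
    return args[0]       # uses only the first argument

def total(*args):
    return sum(args)     # uses all of them

for name, func in {"first": first, "total": total}.items():
    # every function receives the full argument set and ignores what it doesn't need
    print(name, func(2, 3, 4))
```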
answered Mar 3 '14 at 22:45
I came across this problem trying to wrap C code with structs as Python classes. The issue seems to be that "special" functions including __init__ and __cinit__ must be declared with def rather than cdef. This means that they can be called from normal Python, so the type parameters are effectively ignored and everything is treated as object.
In J.F. Sebastian's answer the fix is not the wrapping - a double is a basic numeric type and so there is a default conversion between the C/C++ type and the Python object. Czarek's answer is basically correct - you need to use a fake constructor idiom, using a global function. It is not possible to use a @staticmethod decorator, as they cannot be applied to cdef functions. The approach looks simpler when applied to the original example provided.
answered May 1 '14 at 20:33
On recent versions of Cython (as of 0.22 at least), the @staticmethod decorator can be applied on cdef functions, so one can now make the global creator function into a static class one for a neater organisation. – Dologan Apr 1 '15 at 14:40
Bar c++ class is not a basic numeric type and there is no default conversion. – J.F. Sebastian Jun 24 '15 at 15:56
@J.F.Sebastian Could you explain what you mean? Bar does not have to be a basic numeric type in order to store a pointer to it. – Amoss Jun 26 '15 at 6:07
@Amoss: my answer wraps Bar. You said: "In J.F. Sebastian's answer the fix is not the wrapping - a double is a basic numeric type and so there is a default conversion between the C/C++ type and the Python object." My answer is about wrapping. As you said: you can't pass C++ pointer to a def function. – J.F. Sebastian Jun 26 '15 at 11:32
A callable object is an object that can accept some arguments (also called parameters) and possibly return an object (often a tuple containing multiple objects).
A function is the simplest callable object in Python, but there are others, such as classes or certain class instances.

Defining Functions
A function is defined in Python by the following format:
If a function takes no arguments, it must still include the parentheses, but without anything in them:
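The format itself was stripped from this copy; a sketch of what is being described:

```python
def function_name(arg1, arg2):
    """Optional docstring describing the function."""
    return arg1 + arg2

def no_arguments():
    # the parentheses are still required even with no arguments
    return "called with no arguments"
```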
The arguments in the function definition bind the arguments passed at function invocation (i.e. when the function is called), which are called actual parameters, to the names given when the function is defined, which are called formal parameters. The interior of the function has no knowledge of the names given to the actual parameters; the names of the actual parameters may not even be accessible (they could be inside another function).
A function can 'return' a value, for example:
A function can define variables within the function body, which are considered 'local' to the function. The locals together with the arguments comprise all the variables within the scope of the function. Any names within the function are unbound when the function returns or reaches the end of the function body.
You can return multiple values as follows:
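A sketch of the idiom (illustrative names):

```python
def divide(dividend, divisor):
    quotient = dividend // divisor
    remainder = dividend % divisor
    return quotient, remainder   # the two values are packed into a tuple

q, r = divide(17, 5)   # tuple unpacking: q == 3, r == 2
```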
Keywords: returning multiple values, multiple return values.

Declaring Arguments
When calling a function that takes some values for further processing, we need to send some values as function arguments. For example:

Default Argument Values
If any of the formal parameters in the function definition are declared with the format "arg = value", then you will have the option of not specifying a value for those arguments when calling the function. If you do not specify a value, then that parameter will have the default value given when the function executes.

Variable-Length Argument Lists
Python allows you to declare two special arguments which allow you to create arbitrary-length argument lists. This means that each time you call the function, you can specify any number of arguments above a certain number.
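The declaration the next paragraph refers to was not preserved; a sketch consistent with it (the parameter name "remaining" comes from the text, the rest is illustrative):

```python
def print_tail(first, second, *remaining):
    # first and second must be supplied; any further positional
    # arguments are packed into the tuple "remaining"
    return remaining

print_tail(1, 2, 3, 4, 5)   # remaining == (3, 4, 5)
```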
When calling the above function, you must provide a value for each of the first two arguments. However, since the third parameter is marked with an asterisk, any actual parameters after the first two will be packed into a tuple and bound to "remaining."
If we declare a formal parameter prefixed with two asterisks, then it will be bound to a dictionary containing any keyword arguments in the actual parameters which do not correspond to any formal parameters. For example, consider the function:
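The function definition was stripped here; a sketch consistent with the description (the names max_length and entries are taken from the surrounding text):

```python
def make_dictionary(max_length=10, **entries):
    # keyword arguments other than max_length are collected in the dict
    # "entries"; only the first max_length items are kept
    return dict(list(entries.items())[:max_length])

make_dictionary(key1="value1", key2="value2")
```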
If we call this function with any keyword arguments other than max_length, they will be placed in the dictionary "entries." If we include the keyword argument of max_length, it will be bound to the formal parameter max_length, as usual.

By Value and by Reference
Objects passed as arguments to functions are passed by reference; they are not copied. Thus, passing a large list as an argument does not involve copying all its members to a new location in memory. Note that even integers are objects. However, the distinction of by value and by reference present in some other programming languages often serves to distinguish whether the passed arguments can actually be changed by the called function and whether the calling function can see the changes.
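A quick sketch of the visible difference (illustrative names):

```python
def append_item(a_list):
    a_list.append(4)        # mutates the caller's list in place

def rebind(number):
    number = number + 1     # rebinds the local name only; the caller is unaffected

items = [1, 2, 3]
append_item(items)          # items is now [1, 2, 3, 4]

n = 1
rebind(n)                   # n is still 1
```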
Passed objects of mutable types such as lists and dictionaries can be changed by the called function and the changes are visible to the calling function. Passed objects of immutable types such as integers and strings cannot be changed by the called function; the calling function can be certain that the called function will not change them. For mutability, see also the Data Types chapter.

Preventing Argument Change
An argument cannot be declared constant, i.e. guaranteed not to be changed by the called function. If an argument is of an immutable type, it cannot be changed anyway, but if it is of a mutable type such as a list, the calling function is at the mercy of the called function. Thus, if the calling function wants to make sure a passed list does not get changed, it has to pass a copy of the list.

Calling Functions
A function can be called by appending the arguments in parentheses to the function name, or an empty matched set of parentheses if the function takes no arguments.
A function's return value can be used by assigning it to a variable, like so:
As shown above, when calling a function you can specify the parameters by name, and you can do so in any order.
The above is valid, and start will have the default value of 0. One restriction is that once an argument has been named, all arguments after it must also be named. The following is not valid
because the third argument ("my message") is an unnamed argument.
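The original examples were not preserved; a hypothetical function illustrating the rule (all names here are made up, only "my message" and the start default of 0 come from the text):

```python
def show(message, start=0, repeat=1):
    return " " * start + message * repeat

show(message="hi", repeat=2)    # valid: start keeps its default of 0
show(repeat=2, message="hi")    # valid: named arguments may appear in any order
# show(start=1, repeat=2, "my message")   # SyntaxError: positional argument follows keyword argument
```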
A closure is a nested function with an after-return access to the data of the outer function, where the nested function is returned by the outer function as a function object. Thus, even when the outer function has finished its execution after being called, the closure function returned by it can refer to the values of the variables that the outer function had when it defined the closure function.
Closures are possible in Python because functions are first-class objects. A function is merely an object of type function. Being an object means it is possible to pass a function object (an uncalled function) around as an argument or a return value, or to assign another name to the function object. A unique feature that makes closures useful is that the enclosed function may use the names defined in the parent function's scope.
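A minimal sketch of a closure (illustrative names):

```python
def make_adder(n):
    def adder(x):
        return x + n   # n is read from make_adder's scope
    return adder

add_five = make_adder(5)   # make_adder has finished, yet n == 5 survives
add_five(3)                # 8
```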
A lambda is an anonymous (unnamed) function. It is used primarily to write very short functions that are a hassle to define in the normal way. A function like this:
may also be defined using lambda
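The stripped example presumably paired a normal definition with its lambda equivalent, along these lines (names are illustrative):

```python
def add_one(x):
    return x + 1

add_one_l = lambda x: x + 1   # the equivalent lambda form
```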
Lambda is often used as an argument to other functions that expect a function object, such as sorted()'s 'key' argument.
The lambda form is often useful as a closure, such as illustrated in the following example:
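The example itself was stripped; a sketch consistent with the note below (the names pre and post come from the text):

```python
def make_wrapper(pre, post):
    # the returned lambda closes over pre and post
    return lambda text: pre + text + post

bracket = make_wrapper("[", "]")
bracket("hello")   # '[hello]'
```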
Note that the lambda function can use the values of variables from the scope in which it was created (like pre and post). This is the essence of closure.

Generator Functions
When discussing loops, you came across the concept of an iterator. This yields in turn each element of some sequence, rather than the entire sequence at once, allowing you to deal with sequences much larger than might fit in memory at once.
You can create your own iterators, by defining what is known as a generator function. To illustrate the usefulness of this, let us start by considering a simple function to return the concatenation of two lists:
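For instance, a sketch of the list-based version:

```python
def concat(a, b):
    return a + b   # builds the whole combined list in memory

concat([1, 2], [3, 4])   # [1, 2, 3, 4]
```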
Imagine wanting to do something like concat(range(0, 1000000), range(1000000, 2000000))
That would work, but it would consume a lot of memory.
Consider an alternative definition, which takes two iterators as arguments:
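A sketch of the generator version, with a small-scale usage example:

```python
def concat(a, b):
    # yield elements one at a time instead of building a combined list
    for x in a:
        yield x
    for x in b:
        yield x

# iterate lazily over two ranges without materialising either of them
for n in concat(range(0, 5), range(5, 10)):
    print(n)
```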
Notice the use of the yield statement, instead of return. We can now use this something like
and print out an awful lot of numbers, without using a lot of memory at all.
You can still pass a list or other sequence type wherever Python expects an iterator (like to an argument of your concat function); this will still work, and makes it easy not to have to worry about the difference where you don’t need to.

External Links
In this chapter, we'll look at Boost.Python powered functions in closer detail. We will see some facilities to make exposing C++ functions to Python safe from potential pitfalls such as dangling pointers and references. We will also see facilities that will make it even easier for us to expose C++ functions that take advantage of C++ features such as overloading and default arguments.
But before you do, you might want to fire up Python 2.2 or later and type >>> import this.

Call Policies
In C++, we often deal with arguments and return types such as pointers and references. Such primitive types are rather, ummmm, low level and they really don't tell us much. At the very least, we don't know the owner of the pointer or the referenced object. No wonder languages such as Java and Python never deal with such low level entities. In C++, it's usually considered a good practice to use smart pointers which exactly describe ownership semantics. Still, even good C++ interfaces use raw references and pointers sometimes, so Boost.Python must deal with them. To do this, it may need your help. Consider the following C++ function:
How should the library wrap this function? A naive approach builds a Python X object around the result reference. This strategy might or might not work out. Here's an example where it didn't:
What's the problem?
Well, what if f() was implemented as shown below:
The problem is that the lifetime of the result X& is tied to the lifetime of y, because f() returns a reference to a member of the y object. This idiom is not uncommon and perfectly acceptable in the context of C++. However, Python users should not be able to crash the system just by using our C++ interface. In this case deleting y will invalidate the reference to X. We have a dangling reference.
Here's what's happening:
We could copy result into a new object:
This is not really the intent of our C++ interface. We've broken our promise that the Python interface should reflect the C++ interface as closely as possible.
Our problems do not end there. Suppose Y is implemented as follows:
Notice that the data member z is held by class Y using a raw pointer. Now we have a potential dangling pointer problem inside Y:
For reference, here's the implementation of f again:
Here's what's happening:
Call Policies may be used in situations such as the example detailed above. In our example, return_internal_reference and with_custodian_and_ward are our friends:
What are the 1 and 2 parameters, you ask?
Informs Boost.Python that the first argument, in our case Y& y, is the owner of the returned reference: X&. The "1" simply specifies the first argument. In short: "return an internal reference X& owned by the 1st argument Y& y".
Informs Boost.Python that the lifetime of the argument indicated by ward (i.e. the 2nd argument: Z* z ) is dependent on the lifetime of the argument indicated by custodian (i.e. the 1st argument: Y& y ).
It is also important to note that we have defined two policies above. Two or more policies can be composed by chaining. Here's the general syntax:
Here is the list of predefined call policies. A complete reference detailing these can be found here.
"Explicit is better than implicit"
"In the face of ambiguity, refuse the temptation to guess"
So far we have concentrated on making C functions callable from Python. The reverse is also useful: calling Python functions from C. This is especially the case for libraries that support so-called ``callback'' functions. If a C interface makes use of callbacks, the equivalent Python often needs to provide a callback mechanism to the Python programmer; the implementation will require calling the Python callback functions from a C callback. Other uses are also imaginable.
Fortunately, the Python interpreter is easily called recursively, and there is a standard interface to call a Python function. (I won't dwell on how to call the Python parser with a particular string as input -- if you're interested, have a look at the implementation of the "-c " command line option in Python/pythonmain.c from the Python source code.)
Calling a Python function is easy. First, the Python program must somehow pass you the Python function object. You should provide a function (or some other interface) to do this. When this function is called, save a pointer to the Python function object (be careful to Py_INCREF() it!) in a global variable -- or wherever you see fit. For example, the following function might be part of a module definition:
This function must be registered with the interpreter using the METH_VARARGS flag; this is described in section 1.4. ``The Module's Method Table and Initialization Function.'' The PyArg_ParseTuple() function and its arguments are documented in section 1.7. ``Format Strings for PyArg_ParseTuple() .''
The macros Py_XINCREF() and Py_XDECREF() increment/decrement the reference count of an object and are safe in the presence of NULL pointers (but note that temp will not be NULL in this context). More info on them in section 1.10. ``Reference Counts.''
Later, when it is time to call the function, you call the C function PyEval_CallObject(). This function has two arguments, both pointers to arbitrary Python objects: the Python function, and the argument list. The argument list must always be a tuple object, whose length is the number of arguments. To call the Python function with no arguments, pass an empty tuple; to call it with one argument, pass a singleton tuple. Py_BuildValue() returns a tuple when its format string consists of zero or more format codes between parentheses. For example:
PyEval_CallObject() returns a Python object pointer: this is the return value of the Python function. PyEval_CallObject() is ``reference-count-neutral'' with respect to its arguments. In the example a new tuple was created to serve as the argument list, which is Py_DECREF() -ed immediately after the call.
The return value of PyEval_CallObject() is ``new'': either it is a brand new object, or it is an existing object whose reference count has been incremented. So, unless you want to save it in a global variable, you should somehow Py_DECREF() the result, even (especially!) if you are not interested in its value.
Before you do this, however, it is important to check that the return value isn't NULL. If it is, the Python function terminated by raising an exception. If the C code that called PyEval_CallObject() is called from Python, it should now return an error indication to its Python caller, so the interpreter can print a stack trace, or the calling Python code can handle the exception. If this is not possible or desirable, the exception should be cleared by calling PyErr_Clear(). For example:
Depending on the desired interface to the Python callback function, you may also have to provide an argument list to PyEval_CallObject(). In some cases the argument list is also provided by the Python program, through the same interface that specified the callback function. It can then be saved and used in the same manner as the function object. In other cases, you may have to construct a new tuple to pass as the argument list. The simplest way to do this is to call Py_BuildValue(). For example, if you want to pass an integral event code, you might use the following code:
Note the placement of "Py_DECREF(arglist) " immediately after the call, before the error check! Also note that strictly spoken this code is not complete: Py_BuildValue() may run out of memory, and this should be checked.
If what you're trying to do is something like this: And then: The short answer is: "you can't". This is not a Boost.Python limitation so much as a limitation of C++. The problem is that a Python function is actually data, and the only way of associating data with a C++ function pointer is to store it in a static variable of the function. The problem with that is that you can only associate one piece of data with every C++ function, and we have no way of compiling a new C++ function on-the-fly for every Python function you decide to pass to foo. In other words, this could work if the C++ function is always going to invoke the same Python function, but you probably don't want that.
If you have the luxury of changing the C++ code you're wrapping, pass it an object instead and call that; the object's overloaded function call operator will invoke the Python function you pass it.
For more perspective on the issue, see this posting.
That exception is protecting you from causing a nasty crash. It usually happens in response to some code like this: And you get:
In this case, the Python method invoked by call_method constructs a new Python object. You're trying to return a reference to a C++ object (an instance of class period) contained within and owned by that Python object. Because the called method handed back a brand new object, the only reference to it is held for the duration of get_floating_frequency() above. When the function returns, the Python object will be destroyed, destroying the instance of class period, and leaving the returned reference dangling. That's already undefined behavior, and if you try to do anything with that reference you're likely to cause a crash. Boost.Python detects this situation at runtime and helpfully throws an exception instead of letting you do that.
Q:I have an object composed of 12 doubles. A const& to this object is returned by a member function of another class. From the viewpoint of using the returned object in Python I do not care if I get a copy or a reference to the returned object. In Boost.Python Version 2 I have the choice of using copy_const_reference or return_internal_reference. Are there considerations that would lead me to prefer one over the other, such as size of generated code or memory overhead?
A: copy_const_reference will make an instance with storage for one of your objects, size = base_size + 12 * sizeof(double). return_internal_reference will make an instance with storage for a pointer to one of your objects, size = base_size + sizeof(void*). However, it will also create a weak reference object which goes in the source object's weakreflist and a special callback object to manage the lifetime of the internally-referenced object. My guess? copy_const_reference is your friend here, resulting in less overall memory use and less fragmentation, also probably fewer total cycles.

How can I wrap functions which take C++ containers as arguments?
Ralf W. Grosse-Kunstleve provides these notes:
This type of C++/Python binding is most suitable for containers that may contain a large number of elements (>10000).
It would also be useful to also have "custom lvalue converters" such as std::vector<> <-> Python list. These converters would support the modification of the Python list from C++. For example:
Python: Custom lvalue converters require changes to the Boost.Python core library and are currently not available.
The "scitbx" files referenced above are available via anonymous CVS:

fatal error C1204: Compiler limit: internal structure overflow
Q:I get this error message when compiling a large source file. What can I do?
A: You have two choices:
more_of_my_module.cpp. If you find that a class_<...> declaration can't fit in a single source file without triggering the error, you can always pass a reference to the class_ object to a function in another source file, and call some of its member functions (e.g. def(...)) in the auxiliary source file:

How do I debug my Python extensions?
Greg Burley gives the following answer for Unix GCC users:
Once you have created a Boost.Python extension for your C++ library or class, you may need to debug the code. After all, this is one of the reasons for wrapping the library in Python. An expected side-effect or benefit of using BPL is that debugging should be isolated to the C++ library that is under test, given that the Python code is minimal and boost::python either works or it doesn't. (i.e. While errors can occur when the wrapping method is invalid, most errors are caught by the compiler ;-).
The basic steps required to initiate a gdb session to debug a c++ library via python are shown here. Note, however that you should start the gdb session in the directory that contains your BPL my_ext.so module.
Greg's approach works even better using Emacs' " gdb " command, since it will show you each line of source as you step through it.
On Windows, my favorite debugging solution is the debugger that comes with Microsoft Visual C++ 7. This debugger seems to work with code generated by all versions of Microsoft and Metrowerks toolsets; it's rock solid and "just works" without requiring any special tricks from the user.
Raoul Gough has provided the following for gdb on Windows:
gdb support for Windows DLLs has improved lately, so it is now possible to debug Python extensions using a few tricks. Firstly, you will need an up-to-date gdb with support for minimal symbol extraction from a DLL. Any gdb from version 6 onwards, or Cygwin gdb-20030214-1 and onwards should do. A suitable release will have a section in the gdb.info file under Configuration – Native – Cygwin Native – Non-debug DLL symbols. Refer to that info section for more details of the procedures outlined here.
Secondly, it seems necessary to set a breakpoint in the Python interpreter, rather than using ^C to break execution. A good place to set this breakpoint is PyOS_Readline, which will stop execution immediately before reading each interactive Python command. You have to let Python start once under the debugger, so that it loads its own DLL, before you can set the breakpoint:

Debugging extensions through Boost.Build
If you are launching your extension module tests with Boost.Build using the boost-python-runtest rule, you can ask it to launch your debugger for you by adding "--debugger=debugger" to your bjam command-line: It can also be extremely useful to add the -d+2 option when you run your test, because Boost.Build will then show you the exact commands it uses to invoke it. This will invariably involve setting up PYTHONPATH and other important environment variables such as LD_LIBRARY_PATH which may be needed by your debugger in order to get things to work right.

Why doesn't my *= operator work?
Q: I have exported my class to Python, with many overloaded operators. It works fine for me except for the *= operator. It always tells me "can't multiply sequence with non int type". If I use p1.__imul__(p2) instead of p1 *= p2, it successfully executes my code. What's wrong with me?
A: There's nothing wrong with you. This is a bug in Python 2.2. You can see the same effect in Pure Python (you can learn a lot about what's happening in Boost.Python by playing with new-style classes in Pure Python).
To cure this problem, all you need to do is upgrade your Python to version 2.2.1 or later.

Does Boost.Python work with Mac OS X?
It is known to work under 10.2.8 and 10.3 using Apple's gcc 3.3 compiler: Under 10.2.8 get the August 2003 gcc update (free at http://connect.apple.com/ ). Under 10.3 get the Xcode Tools v1.0 (also free).
Python 2.3 is required. The Python that ships with 10.3 is fine. Under 10.2.8 use these commands to install Python as a framework: The last command requires root privileges because the target directory is /Library/Frameworks/Python.framework/Versions/2.3. However, the installation does not interfere with the Python version that ships with 10.2.8.
It is also crucial to increase the stacksize before starting compilations; if the stacksize is too small the build might crash with internal compiler errors.
Sometimes Apple's compiler exhibits a bug by printing an error like the following while compiling a boost::python::class_<your_type> template instantiation: We do not know a general workaround, but if the definition of your_type can be modified the following was found to work in all cases encountered so far:
"I am wrapping a function that always returns a pointer to an already-held C++ object." One way to do that is to hijack the mechanisms used for wrapping a class with virtual functions. If you make a wrapper class with an initial PyObject* constructor argument and store that PyObject* as "self", you can get back to it by casting down to that wrapper type in a thin wrapper function. For example: Of course, if X has no virtual functions you'll have to use static_cast instead of dynamic_cast, with no runtime check that it's valid. This approach also only works if the X object was constructed from Python, because Xs constructed from C++ are of course never X_wrap objects.
Another approach to this requires you to change your C++ code a bit; if that's an option for you, it might be a better way to go. When a shared_ptr<X> is converted from Python, the shared_ptr actually manages a reference to the containing Python object. When a shared_ptr<X> is converted back to Python, the library checks to see if it's one of those "Python object managers" and, if so, just returns the original Python object. So you could just write object(p) to get the Python object back. To exploit this you'd have to be able to change the C++ code you're wrapping so that it deals with shared_ptr instead of raw pointers.
There are other approaches too. The functions that receive the Python object that you eventually want to return could be wrapped with a thin wrapper that records the correspondence between the object address and its containing Python object, and you could have your f_wrap function look in that mapping to get the Python object out.
Part of an API that I'm wrapping goes something like this:
Even binding the lifetime of a to b via with_custodian_and_ward doesn't prevent the Python object a from ultimately trying to delete the object it's pointing to. Is there a way to accomplish a 'transfer of ownership' of a wrapped C++ object?
Yes: make sure the C++ object is held by auto_ptr: Then make a thin wrapper function which takes an auto_ptr parameter: Wrap that as B.add. Note that pointers returned via manage_new_object will also be held by auto_ptr, so this transfer of ownership will also work correctly.
Please refer to the Reducing Compiling Time section in the tutorial.
Please refer to the Creating Packages section in the tutorial.

error C2064: term does not evaluate to a function taking 2 arguments
Niall Douglas provides these notes:
If you see Microsoft Visual C++ 7.1 (MS Visual Studio .NET 2003) issue an error message like the following it is most likely due to a bug in the compiler: This message is triggered by code like the following: The bug is related to the throw() modifier. As a workaround cast off the modifier. E.g.
(The bug has been reported to Microsoft.)

How do I handle void * conversion?
Niall Douglas provides these notes:
For several reasons Boost.Python does not support void * as an argument or as a return value. However, it is possible to wrap functions with void * arguments or return values using thin wrappers and the opaque pointer facility. E.g.

How can I automatically convert my custom string type to and from a Python string?
Ralf W. Grosse-Kunstleve provides these notes:

Below is a small, self-contained demo extension module that shows how to do this. Here is the corresponding trivial test: If you look at the code you will find:
Niall Douglas provides these notes:
If you define custom converters similar to the ones shown above the def_readonly() and def_readwrite() member functions provided by boost::python::class_ for direct access to your member data will not work as expected. This is because def_readonly("bar", &foo::bar) is equivalent to: Similarly, def_readwrite("bar", &foo::bar) is equivalent to: In order to define return value policies compatible with the custom conversions replace def_readonly() and def_readwrite() by add_property(). E.g.

Is Boost.Python thread-aware/compatible with multiple interpreters?
Niall Douglas provides these notes:
The quick answer to this is: no.
The longer answer is that it can be patched to be so, but it's complex. You will need to add custom lock/unlock wrapping of every point where your code enters Boost.Python (particularly every virtual function override), plus heavily modify boost/python/detail/invoke.hpp with custom unlock/lock wrapping of every point where Boost.Python enters your code. You must furthermore take care not to unlock/lock when Boost.Python is invoking iterator changes via invoke.hpp.
There is a patched invoke.hpp posted on the C++-SIG mailing list archives, and you can find a real implementation of all the machinery necessary to fully implement this in the TnFOX project at this SourceForge project location.
Revised 28 January, 2004
We can create a function that writes the Fibonacci series to an arbitrary boundary:
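The example itself is not reproduced here; a minimal sketch of such a function (the name fib follows the convention used later in this section):

```python
def fib(n):
    """Print the Fibonacci series up to n."""
    a, b = 0, 1
    while a < n:
        print(a, end=' ')
        a, b = b, a + b
    print()

# Call the function we just defined:
fib(2000)
```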
The keyword def introduces a function definition. It must be followed by the function name and the parenthesized list of formal parameters. The statements that form the body of the function start at the next line, and must be indented.
The first statement of the function body can optionally be a string literal; this string literal is the function’s documentation string, or docstring. (More about docstrings can be found in the section Documentation Strings.) There are tools which use docstrings to automatically produce online or printed documentation, or to let the user interactively browse through code; it’s good practice to include docstrings in code that you write, so make a habit of it.
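As a quick illustration (the function here is made up for this sketch), the docstring is stored on the function object itself, which is what documentation tools read:

```python
def double(x):
    """Return twice the value of x."""
    return x * 2

# The docstring is available as an attribute of the function object:
print(double.__doc__)  # Return twice the value of x.
```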
The execution of a function introduces a new symbol table used for the local variables of the function. More precisely, all variable assignments in a function store the value in the local symbol table; whereas variable references first look in the local symbol table, then in the local symbol tables of enclosing functions, then in the global symbol table, and finally in the table of built-in names. Thus, global variables cannot be directly assigned a value within a function (unless named in a global statement), although they may be referenced.
The actual parameters (arguments) to a function call are introduced in the local symbol table of the called function when it is called; thus, arguments are passed using call by value (where the value is always an object reference, not the value of the object). When a function calls another function, a new local symbol table is created for that call.
A function definition introduces the function name in the current symbol table. The value of the function name has a type that is recognized by the interpreter as a user-defined function. This value can be assigned to another name which can then also be used as a function. This serves as a general renaming mechanism:
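For instance (a toy function, not from the original text):

```python
def greet(name):
    return 'Hello, ' + name

f = greet          # the function object now has a second name
print(f('world'))  # Hello, world
```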
Coming from other languages, you might object that fib is not a function but a procedure since it doesn’t return a value. In fact, even functions without a return statement do return a value, albeit a rather boring one. This value is called None (it’s a built-in name). Writing the value None is normally suppressed by the interpreter if it would be the only value written. You can see it if you really want to using print :
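A brief sketch (the name proc is hypothetical):

```python
def proc():
    pass  # no return statement

result = proc()
# At the interactive prompt, a bare `result` would print nothing,
# because writing None is suppressed; print shows it explicitly:
print(result)  # None
```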
It is simple to write a function that returns a list of the numbers of the Fibonacci series, instead of printing it:
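Such a function might look like this (a sketch; fib2 is the conventional name for this variant):

```python
def fib2(n):
    """Return a list containing the Fibonacci series up to n."""
    result = []
    a, b = 0, 1
    while a < n:
        result.append(a)  # collect the value instead of printing it
        a, b = b, a + b
    return result

print(fib2(100))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```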
This example, as usual, demonstrates some new Python features:
It is also possible to define functions with a variable number of arguments. There are three forms, which can be combined.
4.7.1. Default Argument Values
The most useful form is to specify a default value for one or more arguments. This creates a function that can be called with fewer arguments than it is defined to allow. For example:
This function can be called in several ways:
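The concrete example is not reproduced here; a self-contained sketch with made-up names, showing the different call styles (and the `in` membership test discussed next):

```python
def greet(name, greeting='Hello', shout=False):
    # `in` tests whether a sequence contains a value
    if name in ('World', 'Everyone'):
        greeting = 'Hey'
    message = greeting + ', ' + name
    return message.upper() if shout else message

# giving only the mandatory argument:
print(greet('Ada'))              # Hello, Ada
# giving one of the optional arguments:
print(greet('Ada', 'Hi'))        # Hi, Ada
# giving all arguments:
print(greet('Ada', 'Hi', True))  # HI, ADA
```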
This example also introduces the in keyword. This tests whether or not a sequence contains a certain value.
The default values are evaluated at the point of function definition in the defining scope, so that
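The elided example presumably resembles the following sketch, in which the default captures the value the name had when the def statement ran:

```python
i = 5

def f(arg=i):
    return arg

i = 6
print(f())  # 5 -- the default was evaluated at definition time, not call time
```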
Important warning: The default value is evaluated only once. This makes a difference when the default is a mutable object such as a list, dictionary, or instances of most classes. For example, the following function accumulates the arguments passed to it on subsequent calls:
This will print
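The elided function is presumably the classic mutable-default pitfall; a sketch of it, together with the usual None-sentinel fix:

```python
def f(a, L=[]):  # the default list is created once, when `def` executes
    L.append(a)
    return L

print(f(1))  # [1]
print(f(2))  # [1, 2]
print(f(3))  # [1, 2, 3]

def f_fixed(a, L=None):
    if L is None:  # a fresh list on every call that omits L
        L = []
    L.append(a)
    return L

print(f_fixed(1))  # [1]
print(f_fixed(2))  # [2]
```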