Last update: July, 2024
This guide describes how to create mechanical and biomechanical models in ArtiSynth using its Java API. Detailed information on how to use the ArtiSynth GUI for model visualization, navigation and simulation control is given in the ArtiSynth User Interface Guide. It is also possible to interface ArtiSynth with, or run it under, MATLAB. For information on this, see the guide Interfacing ArtiSynth to MATLAB.
Information on how to install and configure ArtiSynth is given in the installation guides for Windows, MacOS, and Linux.
It is assumed that the reader is familiar with basic Java programming, including variable assignment, control flow, exceptions, functions and methods, object construction, inheritance, and method overloading. Some familiarity with the basic I/O classes defined in java.io.*, including input and output streams and the specification of file paths using File, as well as the collection classes ArrayList and LinkedList defined in java.util.*, is also assumed.
Section 1 offers a general overview of ArtiSynth’s software design, and briefly describes the algorithms used for physical simulation (Section 1.2). The latter section may be skipped on first reading. A more comprehensive overview paper is available online.
The remainder of the manual gives detailed instructions on how to build various types of mechanical and biomechanical models. Sections 3 and 4 give detailed information about building general mechanical models, involving particles, springs, rigid bodies, joints, constraints, and contact. Section 5 describes how to add control panels, controllers, and input and output data streams to a simulation. Section 6 describes how to incorporate finite element models. The required mathematics is reviewed in Section A.
If time permits, the reader will profit from a top-to-bottom read. However, this may not always be necessary. Many of the sections contain detailed examples, all of which are available in the package artisynth.demos.tutorial and which may be run from ArtiSynth using Models > All demos > tutorial. More experienced readers may wish to find an appropriate example and then work backwards into the text and preceding sections for any needed explanatory detail.
ArtiSynth is an open-source, Java-based system for creating and simulating mechanical and biomechanical models, with specific capabilities for the combined simulation of rigid and deformable bodies, together with contact and constraints. It is presently directed at application domains in biomechanics, medicine, physiology, and dentistry, but it can also be applied to other areas such as traditional mechanical simulation, ergonomic design, and graphical and visual effects.
An ArtiSynth model is composed of a hierarchy of models and model components which are implemented by various Java classes. These may include sub-models (including finite element models), particles, rigid bodies, springs, connectors, and constraints. The component hierarchy may in turn be connected to various agent components, such as control panels, controllers and monitors, and input and output data streams (i.e., probes), which have the ability to control and record the simulation as it advances in time. Agents are presented in more detail in Section 5.
The models and agents are collected together within a top-level component known as a root model. Simulation proceeds under the control of a scheduler, which advances the models through time using a physics simulator. A rich graphical user interface (GUI) allows users to view and edit the model hierarchy, modify component properties, and edit and temporally arrange the input and output probes using a timeline display.
Every ArtiSynth component is an instance of ModelComponent. When connected to the hierarchy, it is assigned a unique number relative to its parent; the parent and number can be obtained using the methods getParent() and getNumber(), respectively. Components may also be assigned a name (using setName()) which is then returned using getName().
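For example, given a component comp that has been added to the hierarchy, these methods might be used as follows (a sketch; the name "elbow" is arbitrary):

```java
comp.setName ("elbow");                       // assign a name
String name = comp.getName();                 // query the name ("elbow")
CompositeComponent parent = comp.getParent(); // obtain the parent
int num = comp.getNumber();                   // obtain the number
```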
A component’s number is not the same as its index. The index gives the component’s sequential list position within the parent, and is always in the range 0 to n-1, where n is the parent’s number of child components. While indices and numbers frequently are the same, they sometimes are not: a component’s number is guaranteed to remain unchanged as long as it remains attached to its parent, whereas its index will change if any preceding components are removed from the parent. For example, if we have a set of components with numbers

0 1 2 3 4 5

and components 2 and 4 are then removed, the remaining components will have numbers

0 1 3 5

whereas the indices will be 0 1 2 3. This consistency of numbers is why they are used to identify components.
Sub-interfaces of ModelComponent include CompositeComponent, which describes components containing child components. A ComponentList is a CompositeComponent that simply contains a list of other components (such as particles, rigid bodies, sub-models, etc.).
Components which contain state information (such as position and velocity) should extend HasState, which provides the methods getState() and setState() for saving and restoring state.
A Model is a sub-interface of CompositeComponent and HasState that contains the notion of advancing through time and which implements this with the methods initialize(t0) and advance(t0, t1, flags), as discussed further in Section 1.1.4. The most common instance of Model used in ArtiSynth is MechModel (Section 1.1.5), which is the top-level container for a mechanical or biomechanical model.
The top-level component in the hierarchy is the root model, which is a subclass of RootModel and which contains a list of models along with lists of agents used to control and interact with these models. The component lists in RootModel include:
- models: top-level models of the component hierarchy
- inputProbes: input data streams for controlling the simulation
- controllers: functions for controlling the simulation
- monitors: functions for observing the simulation
- outputProbes: output data streams for observing the simulation
Each agent may be associated with a specific top-level model.
The names and/or numbers of a component and its ancestors can be used to form a component path name. This path has a construction analogous to Unix file path names, with the ’/’ character acting as a separator. Absolute paths start with ’/’, which indicates the root model. Relative paths omit the leading ’/’ and can begin lower down in the hierarchy. A typical path name might be
/models/JawHyoidModel/axialSprings/lad
For nameless components in the path, their numbers can be used instead. Numbers can also be used for components that have names. Hence the path above could also be represented using only numbers, as in
/0/0/1/5
although this would most likely appear only in machine-generated output.
ArtiSynth simulation proceeds by advancing all of the root model’s top-level models through a sequence of time steps. Every time step is achieved by calling each model’s advance() method:
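```java
// paraphrase of the Model interface; the flags argument can normally be ignored:
public StepAdjustment advance (double t0, double t1, int flags);
```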
This method advances the model from time t0 to time t1, performing whatever physical simulation is required (see Section 1.2). The method may optionally return a StepAdjustment indicating that the step size (t1 - t0) was too large and that the advance should be redone with a smaller step size.
The root model has its own advance(), which in turn calls the advance method for all of the top-level models, in sequence. The advance of each model is surrounded by the application of whatever agents are associated with that model. This is done by calling the agent’s apply() method:
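```java
agent.apply (t0, t1);  // a sketch; exact apply() signatures vary by agent type
```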
Agents not associated with a specific model are applied before (or after) the advance of all other models.
More precise details about model advancement are given in the ArtiSynth Reference Manual.
Most ArtiSynth applications contain a single top-level model which is an instance of MechModel. This is a CompositeComponent that may (recursively) contain an arbitrary number of mechanical components, including finite element models, other MechModels, particles, rigid bodies, constraints, attachments, and various force effectors. The MechModel advance() method invokes a physics simulator that advances these components forward in time (Section 1.2).
For convenience each MechModel contains a number of predefined containers for different component types, including:
- particles: 3 DOF particles
- points: other 3 DOF points
- rigidBodies: 6 DOF rigid bodies
- frames: other 6 DOF frames
- axialSprings: point-to-point springs
- connectors: joint-type connectors between bodies
- constrainers: general constraints
- forceEffectors: general force-effectors
- attachments: attachments between dynamic components
- renderables: renderable components (for visualization only)
Each of these is a child component of MechModel and is implemented as a ComponentList. Special methods are provided for adding and removing items from them. However, applications are not required to use these containers, and may instead create any component containment structure that is appropriate. If not used, the containers will simply remain empty.
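For example, assuming mech refers to a MechModel, items can be managed with calls like these (a sketch):

```java
mech.addParticle (part);     // add a particle to the particles list
mech.removeParticle (part);  // remove it again
mech.addRigidBody (body);    // add a rigid body to the rigidBodies list
```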
Only a brief summary of ArtiSynth physics simulation is described here. Full details are given in [11] and in the related overview paper.
For purposes of physics simulation, the components of a MechModel are grouped as follows:
- Dynamic components: components, such as particles and rigid bodies, that contain position and velocity state, as well as mass. All dynamic components are instances of the Java interface DynamicComponent.
- Force effectors: components, such as springs or finite elements, that exert forces between dynamic components. All force effectors are instances of the Java interface ForceEffector.
- Constrainers: components that enforce constraints between dynamic components. All constrainers are instances of the Java interface Constrainer.
- Attachments: attachments between dynamic components. While technically these are constraints, they are implemented using a different approach. All attachment components are instances of DynamicAttachment.
The positions, velocities, and forces associated with all the dynamic components are denoted by the composite vectors $q$, $u$, and $f$. In addition, the composite mass matrix is given by $M$. Newton’s second law then gives

$$f = \frac{d(Mu)}{dt} = M\dot{u} + \dot{M}u, \qquad (1.1)$$

where the $\dot{M}u$ term accounts for various “fictitious” forces.
Each integration step involves solving for the velocities $u^{k+1}$ at time step $k+1$ given the velocities and forces at step $k$. One way to do this is to solve the expression

$$M u^{k+1} = M u^k + h f \qquad (1.2)$$

for $u^{k+1}$, where $h$ is the step size and $f \equiv f(q^k, u^k, t^k)$. Given the updated velocities $u^{k+1}$, one can determine $\dot{q}^{k+1}$ from

$$\dot{q}^{k+1} = Q u^{k+1} \qquad (1.3)$$

where $Q$ accounts for situations (like rigid bodies) where $\dot{q} \ne u$, and then solve for the updated positions using

$$q^{k+1} = q^k + h \dot{q}^{k+1}. \qquad (1.4)$$
(1.2) and (1.4) together comprise a simple symplectic Euler integrator.
In addition to forces, bilateral and unilateral constraints give rise to locally linear constraints on $u$ of the form

$$G(q)u = 0, \qquad N(q)u \ge 0. \qquad (1.5)$$

Bilateral constraints may include rigid body joints, FEM incompressibility, and point-surface constraints, while unilateral constraints include contact and joint limits. Constraints give rise to constraint forces (in the directions $G(q)^T$ and $N(q)^T$) which supplement the forces of (1.1) in order to enforce the constraint conditions. In addition, for unilateral constraints, we have a complementarity condition in which $Nu > 0$ implies no constraint force, and a constraint force implies $Nu = 0$. Any given constraint usually involves only a few dynamic components and so $G$ and $N$ are generally sparse.
Adding constraints to the velocity solve (1.2) leads to a mixed linear complementarity problem (MLCP) of the form
$$\begin{pmatrix} \hat{M} & -G^T & -N^T \\ G & 0 & 0 \\ N & 0 & 0 \end{pmatrix} \begin{pmatrix} u^{k+1} \\ \tilde{\lambda} \\ \tilde{z} \end{pmatrix} + \begin{pmatrix} -M u^k - h \hat{f} \\ -g \\ -n \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ w \end{pmatrix}, \qquad 0 \le \tilde{z} \perp w \ge 0, \qquad (1.6)$$

where $w$ is a slack variable, $\tilde{\lambda}$ and $\tilde{z}$ give the constraint force impulses over the time step, and $g$ and $n$ are derivative terms defined by

$$g \equiv -h \dot{G} u^k, \qquad n \equiv -h \dot{N} u^k, \qquad (1.7)$$

to account for time variations in $G$ and $N$. In addition, $\hat{M}$ and $\hat{f}$ are $M$ and $f$ augmented with stiffness and damping terms to accommodate implicit integration, which is often required for problems involving deformable bodies. The actual constraint forces $\lambda$ and $z$ can be determined by dividing the impulses by the time step $h$:

$$\lambda = \tilde{\lambda}/h, \qquad z = \tilde{z}/h. \qquad (1.8)$$
We note here that ArtiSynth uses a full coordinate formulation, in which the position of each dynamic body is solved using full, or unconstrained, coordinates, with constraint relationships acting to restrict these coordinates. In contrast, some other simulation systems, including OpenSim [7], use reduced coordinates, in which the system dynamics are formulated using a smaller set of coordinates (such as joint angles) that implicitly take the system’s constraints into account. Each methodology has its own advantages. Reduced formulations yield systems with fewer degrees of freedom and no constraint errors. On the other hand, full coordinates make it easier to combine and connect a wide range of components, including rigid bodies and FEM models.
Attachments between components can be implemented by constraining the velocities of the attached components using special constraints of the form

$$u_j = -G_{j\alpha} u_\alpha \qquad (1.9)$$

where $u_j$ and $u_\alpha$ denote the velocities of the attached and non-attached components. The constraint matrix $G_{j\alpha}$ is sparse, with a non-zero block entry for each master component to which the attached component is connected. The simplest case involves attaching a point $j$ to another point $k$, with the simple velocity relationship

$$u_j = u_k \qquad (1.10)$$

so that $G_{j\alpha}$ has a single entry of $-I$ (where $I$ is the $3 \times 3$ identity matrix) in the $k$-th block column. Another common case involves connecting a point $j$ to a rigid frame $k$. The velocity relationship for this is

$$u_j = v_k + \omega_k \times r_j \qquad (1.11)$$

where $v_k$ and $\omega_k$ are the translational and rotational velocity of the frame and $r_j$ is the location of the point relative to the frame’s origin (as seen in world coordinates). The corresponding $G_{j\alpha}$ contains a single $3 \times 6$ block entry of the form

$$G_{jk} = \begin{pmatrix} -I & [r_j] \end{pmatrix} \qquad (1.12)$$

in the $k$-th block column, where

$$[r] \equiv \begin{pmatrix} 0 & -r_z & r_y \\ r_z & 0 & -r_x \\ -r_y & r_x & 0 \end{pmatrix} \qquad (1.13)$$

is the skew-symmetric cross product matrix of $r$. The attachment constraints $G_{j\alpha}$ could be added directly to (1.6), but their special form allows us to explicitly solve for $u_j$, and hence reduce the size of (1.6), by factoring out the attached velocities before solution.
The MLCP (1.6) corresponds to a single step integrator. However, higher order integrators, such as Newmark methods, usually give rise to MLCPs with an equivalent form. Most ArtiSynth integrators use some variation of (1.6) to determine the system velocity at each time step.
To set up (1.6), the MechModel component hierarchy is traversed and the methods of the different component types are queried for the required values. Dynamic components (type DynamicComponent) provide $q$, $u$, and $M$; force effectors (ForceEffector) determine $\hat{f}$ and the stiffness/damping augmentation used to produce $\hat{M}$; constrainers (Constrainer) supply $G$, $N$, $g$, and $n$; and attachments (DynamicAttachment) provide the information needed to factor out attached velocities.
The core code of the ArtiSynth project is divided into three main packages, each with a number of sub-packages.
The packages under maspack contain general computational utilities that are independent of ArtiSynth and could be used in a variety of other contexts. The main packages include maspack.util, maspack.matrix, maspack.geometry, maspack.geometry.io, maspack.spatialmotion, maspack.collision, maspack.properties, and maspack.render.
The packages under artisynth.core contain the core code for ArtiSynth model components and its GUI infrastructure.
The packages under artisynth.demos contain demonstration models that illustrate ArtiSynth’s modeling capabilities.
ArtiSynth components expose properties, which provide a uniform interface for accessing their internal parameters and state. Properties vary from component to component; those for RigidBody include position, orientation, mass, and density, while those for AxialSpring include restLength and material. Properties are particularly useful for automatically creating control panels and probes, as described in Section 5. They are also used for automating component serialization.
Properties are described only briefly in this section; more detailed descriptions are available in the Maspack Reference Manual and the overview paper.
The set of properties defined for a component is fixed for that component’s class; while property values may vary between component instances, their definitions are class-specific. Properties are exported by a class through code contained in the class definition, as described in Section 5.2.
Each property has a unique name that can be used to access its value interactively in the GUI. This can be done either by using a custom control panel (Section 5.1) or by selecting the component and choosing Edit properties ... from the right-click context menu.
Properties can also be accessed in code using their set/get accessor methods. Unless otherwise specified, the names for these are formed by simply prepending set or get to the property’s name. More specifically, a property with the name foo and a value type of Bar will usually have accessor signatures of
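```java
// expected forms, following the naming rule described above:
Bar getFoo();             // get the current value of foo
void setFoo (Bar value);  // set the value of foo
```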
A property’s name can also be used to obtain a property handle through which its value may be queried or set generically. Property handles are implemented by the class Property and are returned by the component’s getProperty() method. getProperty() takes a property’s name and returns the corresponding handle. For example, components of type Muscle have a property excitation, for which a handle may be obtained using a code fragment such as
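```java
// sketch; "mus" is an arbitrary component name:
Muscle muscle = new Muscle ("mus");
Property prop = muscle.getProperty ("excitation");
```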
Property handles can also be obtained for subcomponents, using a property path that consists of a path to the subcomponent followed by a colon ‘:’ and the property name. For example, to obtain the excitation property for a subcomponent located by axialSprings/lad relative to a MechModel, one could use a call of the form
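```java
// sketch; mech is assumed to refer to the MechModel:
Property prop = mech.getProperty ("axialSprings/lad:excitation");
```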
Composite properties are possible, in which a property value is a composite object that in turn has subproperties. A good example of this is the RenderProps class, which is associated with the property renderProps for renderable objects and which itself can have a number of subproperties such as visible, faceStyle, faceColor, lineStyle, lineColor, etc.
Properties can be declared to be inheritable, so that their values can be inherited from the same properties hosted by ancestor components further up the component hierarchy. Inheritable properties require a more elaborate declaration and are associated with a mode which may be either Explicit or Inherited. If a property’s mode is inherited, then its value is obtained from the closest ancestor exposing the same property whose mode is explicit. In Figure 1.1, the property stiffness is explicitly set in components A, C, and E, and inherited in B and D (which inherit from A) and F (which inherits from C).
ArtiSynth applications are created by writing and compiling an application model that is a subclass of RootModel. This application-specific root model is then loaded and run by the ArtiSynth program.
The code for the application model should:
Declare a no-args constructor
Override the RootModel build() method to construct the application.
ArtiSynth can load a model either using the build method or by reading it from a file:
ArtiSynth creates an instance of the model using the no-args constructor, assigns it a name (which is either user-specified or the simple name of the class), and then calls the build() method to perform the actual construction.
ArtiSynth creates an instance of the model using the no-args constructor, and then the model is named and constructed by reading the file.
The no-args constructor should perform whatever initialization is required in both cases, while the build() method takes the place of the file specification. Unless a model is originally created using a file specification (which is very tedious), the first time creation of a model will almost always entail using the build() method.
The general template for application model code looks like this:
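```java
// a minimal sketch of such a template:
package artisynth.models.experimental;

import artisynth.core.workspace.RootModel;

public class MyModel extends RootModel {

   // build method to do the actual construction of the model
   public void build (String[] args) {
      // add build code here
   }
}
```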
Here, the model itself is called MyModel, and is defined in the (hypothetical) package artisynth.models.experimental (placing models in the super package artisynth.models is common practice but not necessary).
Note: The build() method was only introduced in ArtiSynth 3.1. Prior to that, application models were constructed using a constructor taking a String argument supplying the name of the model. This method of model construction still works but is deprecated.
As mentioned above, the build() method is responsible for actual model construction. Many applications are built using a single top-level MechModel. Build methods for these may look like the following:
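```java
// a sketch of such a build method:
public void build (String[] args) {
   MechModel mech = new MechModel ("mech");
   addModel (mech);

   // ... create and add components to mech ...
}
```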
First, a MechModel is created (with the name "mech" in this example, although any name, or no name, may be given) and added to the list of models in the root model using the addModel() method. Subsequent code then creates and adds the components required by the MechModel, as described in Sections 3, 4 and 6. The build() method also creates and adds to the root model any agents required by the application (controllers, probes, etc.), as described in Section 5.
When constructing a model, there is no fixed order in which components need to be added. For instance, in the above example, addModel(mech) could be called near the end of the build() method rather than at the beginning. The only restriction is that when a component is added to the hierarchy, all other components that it refers to should already have been added to the hierarchy. For instance, an axial spring (Section 3.1) refers to two points. When it is added to the hierarchy, those two points should already be present in the hierarchy.
The build() method takes a String array as an argument, which can be used to transmit application arguments in a manner analogous to the args argument passed to static main() methods. Build arguments can be specified when a model is loaded directly from a class using Models > Load from class ..., or when the startup model is set to automatically load a model when ArtiSynth is first started (Settings > Startup model). Details are given in the “Loading, Simulating and Saving Models” section of the User Interface Guide.
Build arguments can also be listed directly on the ArtiSynth command line when specifying a model to load using the -model <classname> option. This is done by enclosing the desired arguments within square brackets [ ] immediately following the -model option. So, for example,
> artisynth -model projects.MyModel [ -size 50 ]
would cause the strings "-size" and "50" to be passed to the build() method of MyModel.
In order to load an application model into ArtiSynth, the classes associated with its implementation must be made visible to ArtiSynth. This usually involves adding the top-level class folder associated with the application code to the classpath used by ArtiSynth.
The demonstration models referred to in this guide belong to the package artisynth.demos.tutorial and are already visible to ArtiSynth.
In most current ArtiSynth projects, classes are stored in a folder tree separate from the source code, with the top-level class folder named classes, located one level below the project root folder. A typical top-level class folder might be stored in a location like this:
/home/joeuser/artisynthProjects/classes
In the example shown in Section 1.5, the model was created in the package artisynth.models.experimental. Since Java classes are arranged in a folder structure that mirrors package names, with respect to the sample project folder shown above, the model class would be located in
/home/joeuser/artisynthProjects/classes/artisynth/models/experimental
At present there are three ways to make top-level class folders known to ArtiSynth:
If you are using the Eclipse IDE, then you can add the project in which you are developing your model code to the launch configuration that you use to run ArtiSynth. Other IDEs will presumably provide similar functionality.
You can explicitly add the class folders to ArtiSynth’s external classpath. The easiest way to do this is to select “Settings > External classpath ...” from the Settings menu, which will open an external classpath editor which lists all the classpath entries in a large panel on the left. (When ArtiSynth is first installed, the external classpath has no entries, and so this panel will be blank.) Class folders can then be added via the “Add class folder” button, and the classpath is saved using the Save button.
If you are running ArtiSynth from the command line, using the artisynth command (or artisynth.bat on Windows), then you can define a CLASSPATH environment variable in your environment and add the needed folders to this.
If a model’s classes are visible to ArtiSynth, then it may be loaded into ArtiSynth in several ways:
If the root model is contained in a package located under artisynth.demos or artisynth.models, then it will appear in the default model menu (Models in the main menu bar) under the submenu All demos or All models.
A model may also be loaded by choosing “Load from class ...” from the Models menu and specifying its package name and then choosing its root model class. It is also possible to use the -model <classname> command line argument to have a model loaded directly into ArtiSynth when it starts up.
If a model has been saved to a .art file, it may be loaded from that file by choosing File > Load model ....
These methods are described in detail in the section “Loading and Simulating Models” of the ArtiSynth User Interface Guide.
The demonstration models referred to in this guide should already be present in the model menu and may be loaded from the submenu Models > All demos > tutorial.
Once a model is loaded, it can be simulated, or run. Simulation of the model can then be started, paused, single-stepped, or reset using the play controls (Figure 1.2) located at the upper right of the ArtiSynth window frame. Starting and stopping a simulation is done by clicking play/pause, while reset resets the simulation to time 0. The single-step button advances the simulation by one time step. The stop-all button will also stop the simulation, along with any Jython commands or scripts that are running.
Comprehensive information on exploring and interacting with models is given in the ArtiSynth User Interface Guide.
ArtiSynth uses a large number of supporting classes, mostly defined in the super package maspack, for handling mathematical and geometric quantities. Those that are referred to in this manual are summarized in this section.
Among the most basic classes are those used to implement vectors and matrices, defined in maspack.matrix. All vector classes implement the interface Vector and all matrix classes implement Matrix, which provide a number of standard methods for setting and accessing values and reading and writing from I/O streams.
General sized vectors and matrices are implemented by VectorNd and MatrixNd. These provide all the usual methods for linear algebra operations such as addition, scaling, and multiplication:
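```java
// a sketch consistent with the output shown below; the original fragment
// is not reproduced exactly, and the values are chosen for illustration:
VectorNd vec1 = new VectorNd (5);
VectorNd vec2 = new VectorNd (5);
VectorNd result = new VectorNd (5);
MatrixNd mat = new MatrixNd (5, 5);

vec1.set (new double[] {1, 2, 3, 4, 5});
vec2.set (new double[] {0, 2, 0, 4, 0});
mat.setIdentity();
mat.scale (4);               // mat = 4*I

mat.mul (result, vec1);      // result = mat*vec1
result.scaledAdd (2, vec2);  // result += 2*vec2
System.out.println ("result=" + result.toString ("%8.3f"));
```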
As illustrated in the above example, vectors and matrices both provide a toString() method that allows their elements to be formatted using a C-printf style format string. This is useful for providing concise and uniformly formatted output, particularly for diagnostics. The output from the above example is
result= 4.000 12.000 12.000 24.000 20.000
Detailed specifications for the format string are provided in the documentation for NumberFormat.set(String). If either no format string, or the string "%g", is specified, toString() formats all numbers using the full-precision output provided by Double.toString(value).
For computational efficiency, a number of fixed-size vectors and matrices are also provided. The most commonly used are those defined for three dimensions, including Vector3d and Matrix3d:
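```java
// a sketch (values arbitrary):
Vector3d v1 = new Vector3d (1, 2, 3);
Vector3d v2 = new Vector3d (3, 4, 5);
Vector3d vr = new Vector3d ();
Matrix3d M = new Matrix3d ();
M.setIdentity ();

vr.add (v1, v2);    // vr = v1 + v2
vr.cross (v1, v2);  // vr = v1 x v2
M.mul (vr, v1);     // vr = M * v1
```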
maspack.matrix contains a number of classes that implement rotation matrices, rigid transforms, and affine transforms.
Rotations (Section A.1) are commonly described using a RotationMatrix3d, which implements a rotation matrix and contains numerous methods for setting rotation values and transforming other quantities. Some of the more commonly used methods are:
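```java
// a representative subset (paraphrased; see the API for the full set):
R.setIdentity();                   // set to the identity rotation
R.setAxisAngle (ux, uy, uz, ang);  // rotation of ang (radians) about (ux,uy,uz)
R.setRpy (roll, pitch, yaw);       // rotation from roll-pitch-yaw angles
R.mul (R1, R2);                    // R = R1 * R2
R.invert();                        // R = inverse (i.e., transpose) of R
```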
Rotations can also be described by AxisAngle, which characterizes a rotation as a single rotation about a specific axis.
Rigid transforms (Section A.2) are used by ArtiSynth to describe a rigid body’s pose, as well as its relative position and orientation with respect to other bodies and coordinate frames. They are implemented by RigidTransform3d, which exposes its rotational and translational components directly through the fields R (a RotationMatrix3d) and p (a Vector3d). Rotational and translational values can be set and accessed directly through these fields. In addition, RigidTransform3d provides numerous methods, some of the more commonly used of which include:
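```java
// a representative subset (paraphrased; see the API for the full set):
RigidTransform3d X = new RigidTransform3d(); // create the identity transform
X.p.set (1, 2, 3);             // set the translational component directly
X.R.setRpy (0, 0, Math.PI/2);  // set the rotational component directly
X.mul (X1, X2);                // X = X1 * X2
X.invert();                    // X = inverse of X
```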
Affine transforms (Section A.3) are used by ArtiSynth to effect scaling and shearing transformations on components. They are implemented by AffineTransform3d.
Rigid transformations are actually a specialized form of affine transformation in which the basic transform matrix equals a rotation. RigidTransform3d and AffineTransform3d hence both derive from the same base class AffineTransform3dBase.
The rotations and transforms described above can be used to transform both vectors and points in space.
Vectors are most commonly implemented using Vector3d, while points can be implemented using the subclass Point3d. The only difference between Vector3d and Point3d is that the former ignores the translational component of rigid and affine transforms; i.e., as described in Sections A.2 and A.3, a vector v has an implied homogeneous representation of
$$v^* = \begin{pmatrix} v \\ 0 \end{pmatrix} \qquad (2.1)$$
while the representation for a point p is
$$p^* = \begin{pmatrix} p \\ 1 \end{pmatrix} \qquad (2.2)$$
Both classes provide a number of methods for applying rotational and affine transforms. Those used for rotations are
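```java
// sketch of the rotation methods on Vector3d (and Point3d):
v.transform (R);             // v = R * v
v.transform (R, v1);         // v = R * v1
v.inverseTransform (R);      // v = inverse(R) * v
v.inverseTransform (R, v1);  // v = inverse(R) * v1
```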
where R is a rotation matrix and v1 is a vector (or a point in the case of Point3d).
The methods for applying rigid or affine transforms include:
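```java
// sketch of the corresponding methods for rigid or affine transforms:
v.transform (X);             // transform v in place by X
v.transform (X, v1);         // v = X applied to v1
v.inverseTransform (X);      // apply the inverse of X to v
v.inverseTransform (X, v1);  // v = inverse of X applied to v1
```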
where X is a rigid or affine transform. As described above, in the case of Vector3d, these methods ignore the translational part of the transform and apply only the matrix component (R for a RigidTransform3d and A for an AffineTransform3d). In particular, that means that for a RigidTransform3d given by X and a Vector3d given by v, the method calls
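```java
v.transform (X);    // applies only the matrix part X.R, since v is a Vector3d
v.transform (X.R);
```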
produce the same result.
The velocities, forces and inertias associated with 3D coordinate frames and rigid bodies are represented using the 6 DOF spatial quantities described in Sections A.5 and A.6. These are implemented by classes in the package maspack.spatialmotion.
Spatial velocities (or twists) are implemented by Twist, which exposes its translational and angular velocity components through the publicly accessible fields v and w, while spatial forces (or wrenches) are implemented by Wrench, which exposes its translational force and moment components through the publicly accessible fields f and m.
Both Twist and Wrench contain methods for algebraic operations such as addition and scaling. They also contain transform() methods for applying rotational and rigid transforms. The rotation methods simply transform each component by the supplied rotation matrix. The rigid transform methods, on the other hand, assume that the supplied argument represents a transform between two frames fixed within a rigid body, and transform the twist or wrench accordingly, using either (A.27) or (A.29).
The spatial inertia for a rigid body is implemented by SpatialInertia, which contains a number of methods for setting its value given various mass, center of mass, and inertia values, and querying the values of its components. It also contains methods for scaling and adding, transforming between coordinate systems, inversion, and multiplying by spatial vectors.
ArtiSynth makes extensive use of 3D meshes, which are defined in maspack.geometry. They are used for a variety of purposes, including visualization, collision detection, and computing physical properties (such as inertia or stiffness variation within a finite element model).
A mesh is essentially a collection of vertices (i.e., points) that are topologically connected in some way. All meshes extend the abstract base class MeshBase, which supports the vertex definitions, while subclasses provide the topology.
Through MeshBase, all meshes provide methods for adding and accessing vertices. Some of these include:
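```java
// a representative subset (paraphrased; see the MeshBase API):
int numVertices();                  // number of vertices
Vertex3d getVertex (int idx);       // get the idx-th vertex
ArrayList<Vertex3d> getVertices();  // get the list of all vertices
Vertex3d addVertex (double x, double y, double z); // create and add a vertex
```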
Vertices are implemented by Vertex3d, which defines the position of the vertex (returned by the method getPosition()), and also contains support for topological connections. In addition, each vertex maintains an index, obtainable via getIndex(), that equals the index of its location within the mesh’s vertex list. This makes it easy to set up parallel array structures for augmenting mesh vertex properties.
Mesh subclasses currently include:

- PolygonalMesh: implements a 2D surface mesh containing faces implemented using half-edges.
- PolylineMesh: implements a mesh consisting of connected line-segments (polylines).
- PointMesh: implements a point cloud with no topological connectivity.
PolygonalMesh is used quite extensively and provides a number of methods for implementing faces, including:
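```java
// a representative subset (paraphrased; see the PolygonalMesh API):
int numFaces();              // number of faces
Face getFace (int idx);      // get the idx-th face
ArrayList<Face> getFaces();  // get the list of all faces
Face addFace (int[] vidxs);  // create and add a face from vertex indices
```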
The class Face implements a face as a counter-clockwise arrangement of vertices linked together by half-edges (class HalfEdge). Face also supplies a face’s (outward facing) normal via getNormal().
Some mesh uses within ArtiSynth, such as collision detection, require a triangular mesh; i.e., one where all faces have three vertices. The method isTriangular() can be used to check for this. Meshes that are not triangular can be made triangular using triangulate().
Meshes are most commonly created using either one of the factory methods supplied by MeshFactory, or by reading a definition from a file (Section 2.5.5). However, it is possible to create a mesh by direct construction. For example, the following code fragment creates a simple closed tetrahedral surface:
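```java
// a sketch (vertex coordinates arbitrary); faces are arranged
// counter-clockwise as seen from outside the tetrahedron:
PolygonalMesh mesh = new PolygonalMesh();
// add the four vertices
mesh.addVertex (0, 0, 0);
mesh.addVertex (1, 0, 0);
mesh.addVertex (0, 1, 0);
mesh.addVertex (0, 0, 1);
// add the four triangular faces, using vertex indices
mesh.addFace (new int[] {0, 2, 1});
mesh.addFace (new int[] {0, 1, 3});
mesh.addFace (new int[] {1, 2, 3});
mesh.addFace (new int[] {2, 0, 3});
```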
Some of the more commonly used factory methods for creating polyhedral meshes include:
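```java
// a representative subset (argument names paraphrased):
MeshFactory.createBox (wx, wy, wz);             // box with widths wx, wy, wz
MeshFactory.createCylinder (r, h, nslices);     // cylinder with radius r, height h
MeshFactory.createSphere (r, nslices);          // sphere with radius r
MeshFactory.createIcosahedralSphere (r, ndivs); // sphere from subdivided icosahedron
```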
Each factory method creates a mesh in some standard coordinate frame. After creation, the mesh can be transformed using the transform(X) method, where X is either a rigid transform (RigidTransform3d) or a more general affine transform (AffineTransform3d). For example, to create a rotated box centered on a given point, one could do:
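```java
// a sketch; the widths, rotation, and center point are arbitrary:
PolygonalMesh box = MeshFactory.createBox (1, 2, 3);
// rotate the box 45 degrees about the z axis and move it to (5, 6, 7)
RigidTransform3d X = new RigidTransform3d (5, 6, 7);
X.R.setAxisAngle (0, 0, 1, Math.toRadians (45));
box.transform (X);
```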
One can also scale a mesh using scale(s), where s is a single scale factor, or scale(sx,sy,sz), where sx, sy, and sz are separate scale factors for the x, y and z axes. This provides a useful way to create an ellipsoid:
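```java
// scale a unit sphere to create an ellipsoid with (arbitrary) semi-axes 3, 2, 1:
PolygonalMesh ellipsoid = MeshFactory.createSphere (1.0, 32);
ellipsoid.scale (3.0, 2.0, 1.0);
```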
MeshFactory can also be used to create new meshes by performing Boolean operations on existing ones:
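```java
// assumed method names; check the MeshFactory API:
MeshFactory.getIntersection (mesh1, mesh2); // volume common to both meshes
MeshFactory.getUnion (mesh1, mesh2);        // combined volume
MeshFactory.getSubtraction (mesh1, mesh2);  // volume of mesh1 outside mesh2
```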
Meshes provide support for adding normal, color, and texture information, with the exact interpretation of these quantities depending upon the particular mesh subclass. Most commonly this information is used simply for rendering, but in some cases normal information might also be used for physical simulation.
For polygonal meshes, the normal information described here is used only for smooth shading. When flat shading is requested, normals are determined directly from the faces themselves.
Normal information can be set and queried using the following methods:
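```java
// paraphrased signatures; argument types assumed:
void setNormals (List<Vector3d> nrmls, int[] indices); // set normals and indices
ArrayList<Vector3d> getNormals();         // get all normals
int[] getNormalIndices();                 // get all normal indices
int numNormals();                         // number of normals
Vector3d getNormal (int idx);             // get the idx-th normal
void setNormal (int idx, Vector3d nrml);  // set the idx-th normal
void clearNormals();                      // clear all normals and indices
```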
The method setNormals() takes two arguments: a set of normal vectors (nrmls), along with a set of index values (indices) that map these normals onto the vertices of each of the mesh’s geometric features. Often, there will be one unique normal per vertex, in which case nrmls will have a size equal to the number of vertices, but this is not always the case, as described below. Features for the different mesh subclasses are: faces for PolygonalMesh, polylines for PolylineMesh, and vertices for PointMesh. If indices is specified as null, then nrmls is assumed to have a size equal to the number of vertices, and an appropriate index set is created automatically using createVertexIndices() (described below). Otherwise, indices should have a size equal to the number of features times the number of vertices per feature. For example, consider a PolygonalMesh consisting of two triangles formed from vertex indices (0, 1, 2) and (2, 1, 3), respectively. If normals are specified and there is one unique normal per vertex, then the normal indices are likely to be
[ 0 1 2 2 1 3 ]
As mentioned above, sometimes there may be more than one normal per vertex. This happens in cases when the same vertex uses different normals for different faces. In such situations, the size of the nrmls argument will exceed the number of vertices.
The method setNormals() makes internal copies of the specified normal and index information, and this information can be later read back using getNormals() and getNormalIndices(). The number of normals can be queried using numNormals(), and individual normals can be queried or set using getNormal(idx) and setNormal(idx,nrml). All normals and indices can be explicitly cleared using clearNormals().
Color and texture information can be set using analogous methods. For colors, we have
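```java
// paraphrased signatures; argument types assumed:
void setColors (List<float[]> colors, int[] indices); // set colors and indices
ArrayList<float[]> getColors();          // get all colors
int[] getColorIndices();                 // get all color indices
int numColors();                         // number of colors
float[] getColor (int idx);              // get the idx-th color
void setColor (int idx, float[] color);  // set the idx-th color
void clearColors();                      // clear all colors and indices
```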
When specified as float[], colors are given as RGB or RGBA values, in the range $[0, 1]$, with array lengths of 3 and 4, respectively. The colors returned by getColors() are always RGBA values.
With colors, there may often be fewer colors than the number of vertices. For instance, we may have only two colors, indexed by 0 and 1, and want to use these to alternately color the mesh faces. Using the two-triangle example above, the color indices might then look like this:
[ 0 0 0 1 1 1 ]
Finally, for texture coordinates, we have
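```java
// paraphrased signatures; the index accessor name in particular is assumed:
void setTextureCoords (List<Vector3d> coords, int[] indices);
ArrayList<Vector3d> getTextureCoords();  // get all texture coordinates
int[] getTextureIndices();               // get all texture indices
```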
When specifying indices using setNormals, setColors, or setTextureCoords, it is common to use the same index set as that which associates vertices with features. For convenience, this index set can be created automatically using
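```java
int[] indices = mesh.createVertexIndices();
```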
Alternatively, we may sometimes want to create an index set that assigns the same attribute to each feature vertex. If there is one attribute per feature, the resulting index set is called a feature index set, and can be created using
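```java
int[] indices = mesh.createFeatureIndices(); // method name assumed
```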
If we have a mesh with three triangles and one color per triangle, the resulting feature index set would be
[ 0 0 0 1 1 1 2 2 2 ]
Note: when a mesh is modified by the addition of new features (such as faces for PolygonalMesh), all normal, color and texture information is cleared by default (with normal information being automatically recomputed on demand if automatic normal creation is enabled; see Section 2.5.3). When a mesh is modified by the removal of features, the index sets for normals, colors and textures are adjusted to account for the removal.
For colors, it is possible to request that a mesh explicitly maintain colors for either its vertices or features (Section 2.5.4). When this is done, colors will persist when vertices or features are added or removed, with default colors being automatically created as necessary.
Once normals, colors, or textures have been set, one may want to know which of these attributes are associated with the vertices of a specific feature. To know this, it is necessary to find that feature’s offset into the attribute’s index set. This offset information can be found using the array returned by
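```java
int[] offsets = mesh.getFeatureIndexOffsets(); // method name assumed
```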
For example, the three normals associated with a triangle at index ti can be obtained using
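```java
// a sketch, following the offset scheme described above:
int[] offsets = mesh.getFeatureIndexOffsets();
int[] nidxs = mesh.getNormalIndices();
ArrayList<Vector3d> nrmls = mesh.getNormals();
int k = offsets[ti];
Vector3d nrm0 = nrmls.get (nidxs[k]);
Vector3d nrm1 = nrmls.get (nidxs[k+1]);
Vector3d nrm2 = nrmls.get (nidxs[k+2]);
```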
Alternatively, one may use the convenience methods
which return the attribute values for the $k$-th vertex of the feature indexed by fidx.
In general, the various get methods return references to internal storage information and so should not be modified. However, specific values within the lists returned by getNormals(), getColors(), or getTextureCoords() may be modified by the application. This may be necessary when attribute information changes as the simulation proceeds. Alternatively, one may use methods such as setNormal(idx,nrml), setColor(idx,color), or setTextureCoords(idx,coords).
Also, in some situations, particularly with colors and textures, it may be desirable to not have color or texture information defined for certain features. In such cases, the corresponding index information can be specified as -1, and the getNormal(), getColor() and getTexture() methods will return null for the features in question.
For some mesh subclasses, if normals are not explicitly set, they are computed automatically whenever getNormals() or getNormalIndices() is called. Whether or not this is true for a particular mesh can be queried by the method
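```java
boolean hasAutoNormalCreation(); // name taken from the surrounding description
```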
Setting normals explicitly, using a call to setNormals(nrmls,indices), will overwrite any existing normal information, automatically computed or otherwise. The method
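```java
boolean hasExplicitNormals(); // name assumed from the surrounding description
```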
will return true if normals have been explicitly set, and false if they have been automatically computed or if there is currently no normal information. To explicitly remove normals from a mesh which has automatic normal generation, one may call setNormals() with the nrmls argument set to null.
More detailed control over how normals are automatically created may be available for specific mesh subclasses. For example, PolygonalMesh allows normals to be created with multiple normals per vertex, for vertices that are associated with either open or hard edges. This ability can be controlled using the methods
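```java
// the getter is named elsewhere in this section; the setter form is assumed:
boolean getMultipleAutoNormals();
void setMultipleAutoNormals (boolean enable);
```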
Having multiple normals means that even with smooth shading, open or hard edges will still appear sharp. To make an edge hard within a PolygonalMesh, one may use the methods
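```java
// assumed forms, following the description below:
boolean setHardEdge (Vertex3d vtx0, Vertex3d vtx1, boolean hard);
boolean setHardEdge (int idx0, int idx1, boolean hard);
boolean hasHardEdge (Vertex3d vtx0, Vertex3d vtx1);
boolean hasHardEdge (int idx0, int idx1);
```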
which control the hardness of edges between individual vertices, specified either directly or using their indices.
The method setColors() makes it possible to assign any desired coloring scheme to a mesh. However, it does require that the user explicitly reset the color information whenever new features are added.
For convenience, an application can also request that a mesh explicitly maintain colors for either its vertices or features. These colors will then be maintained when vertices or features are added or removed, with default colors being automatically created as necessary.
Vertex-based coloring can be requested with the method
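```java
mesh.setVertexColoring(); // method name assumed
```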
This will create a separate (default) color for each of the mesh’s vertices, and set the color indices to be equal to the vertex indices, which is equivalent to the call
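```java
mesh.setColors (colors, mesh.createVertexIndices());
```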
where colors contains a default color for each vertex. However, once vertex coloring is enabled, the color and index sets will be updated whenever vertices or features are added or removed. Meanwhile, applications can query or set the colors for any vertex using getColor(idx), or any of the various setColor methods. Whether or not vertex coloring is enabled can be queried using
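```java
boolean getVertexColoring(); // method name assumed
```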
Once vertex coloring is established, the application will typically want to set the colors for all vertices, perhaps using a code fragment like this:
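```java
// hypothetical: color each vertex by its height, assuming z values in [0, 1]:
float[] rgb = new float[3];
for (int i = 0; i < mesh.numVertices(); i++) {
   double z = mesh.getVertex(i).getPosition().z;
   rgb[0] = (float)z;      // red increases with height
   rgb[1] = 0f;
   rgb[2] = 1f-(float)z;   // blue decreases with height
   mesh.setColor (i, rgb);
}
```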
Similarly, feature-based coloring can be requested using the method
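```java
mesh.setFeatureColoring(); // method name assumed
```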
This will create a separate (default) color for each of the mesh’s features (faces for PolygonalMesh, polylines for PolylineMesh, etc.), and set the color indices to equal the feature index set, which is equivalent to the call
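```java
mesh.setColors (colors, mesh.createFeatureIndices());
```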
where colors contains a default color for each feature. Applications can query or set the colors for any vertex using getColor(idx), or any of the various setColor methods. Whether or not feature coloring is enabled can be queried using
PolygonalMesh, PolylineMesh, and PointMesh all provide constructors that allow them to be created from a definition file, with the file format being inferred from the file name suffix:
| Suffix | Format | PolygonalMesh | PolylineMesh | PointMesh |
|---|---|---|---|---|
| .obj | Alias Wavefront | X | X | X |
| .ply | Polygon file format | X | | X |
| .stl | STereoLithography | X | | |
| .gts | GNU triangulated surface | X | | |
| .off | Object file format | X | | |
| .vtk | VTK ascii format | X | | |
| .vtp | VTK XML format | X | | X |

Table 2.1: Supported file formats and the mesh types to which they apply.
The currently supported file formats, and their applicability to the different mesh types, are given in Table 2.1. For example, a PolygonalMesh can be read from either an Alias Wavefront .obj file or an .stl file, as shown in the following example:
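```java
// a sketch (file names arbitrary):
PolygonalMesh mesh = null;
try {
   mesh = new PolygonalMesh (new File ("mesh.obj")); // or "mesh.stl", etc.
}
catch (IOException e) {
   System.err.println ("Can't create mesh: " + e);
}
```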
The file-based mesh constructors may throw an I/O exception if an I/O error occurs or if the indicated format does not support the mesh type. This exception must either be caught, as in the example above, or thrown out of the calling routine.
In addition to file-based constructors, all mesh types implement read and write methods that allow a mesh to be read from or written to a file, with the file format again inferred from the file name suffix:
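```java
// paraphrased signatures: two basic forms plus extended variants:
void read (File file) throws IOException;
void write (File file) throws IOException;
void read (File file, boolean zeroIndexed) throws IOException;
void write (File file, String fmtStr, boolean zeroIndexed) throws IOException;
```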
For the latter methods, the argument zeroIndexed specifies zero-based vertex indexing in the case of Alias Wavefront .obj files, while fmtStr is a C-style format string specifying the precision and style with which the vertex coordinates should be written. (In the former methods, zero-based indexing is false and vertices are written using full precision.)
As an example, the following code fragment writes a mesh as an .stl file:
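```java
// a sketch, assuming mesh refers to an existing PolygonalMesh:
try {
   mesh.write (new File ("mymesh.stl"));
}
catch (IOException e) {
   System.err.println ("Can't write mesh: " + e);
}
```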
Sometimes, more explicit control is needed when reading or writing a mesh from/to a given file format. The constructors and read/write methods described above make use of a specific set of reader and writer classes located in the package maspack.geometry.io. These can be used directly to provide more explicit read/write control. The readers and writers (if implemented) associated with the different formats are given in Table 2.2.
| Suffix | Format | Reader class | Writer class |
|---|---|---|---|
| .obj | Alias Wavefront | WavefrontReader | WavefrontWriter |
| .ply | Polygon file format | PlyReader | PlyWriter |
| .stl | STereoLithography | StlReader | StlWriter |
| .gts | GNU triangulated surface | GtsReader | GtsWriter |
| .off | Object file format | OffReader | OffWriter |
| .vtk | VTK ascii format | VtkAsciiReader | |
| .vtp | VTK XML format | VtkXmlReader | |

Table 2.2: Reader and writer classes associated with the different mesh file formats.
The general usage pattern for these classes is to construct the desired reader or writer with a path to the desired file, and then call readMesh() or writeMesh() as appropriate:
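```java
// a sketch using the Wavefront reader and STL writer (constructor forms assumed):
PolygonalMesh mesh = null;
try {
   WavefrontReader reader = new WavefrontReader (new File ("mesh.obj"));
   mesh = (PolygonalMesh)reader.readMesh();
   StlWriter writer = new StlWriter (new File ("mesh.stl"));
   writer.writeMesh (mesh);
}
catch (IOException e) {
   System.err.println ("I/O error: " + e);
}
```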
Both readMesh() and writeMesh() may throw I/O exceptions, which must be either caught, as in the example above, or thrown out of the calling routine.
For convenience, one can also use the classes GenericMeshReader or GenericMeshWriter, which internally create an appropriate reader or writer based on the file extension. This enables the writing of code that does not depend on the file format:
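```java
// a sketch (static method forms assumed):
PolygonalMesh mesh = null;
try {
   mesh = (PolygonalMesh)GenericMeshReader.readMesh (fileName);
   GenericMeshWriter.writeMesh (outFileName, mesh);
}
catch (IOException e) {
   System.err.println ("I/O error: " + e);
}
```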
Here, fileName can refer to a mesh of any format supported by GenericMeshReader. Note that the mesh returned by readMesh() is explicitly cast to PolygonalMesh. This is because readMesh() returns the superclass MeshBase, since the default mesh created for some file formats may be different from PolygonalMesh.
When writing a mesh out to a file, normal and texture information are also written if they have been explicitly set and the file format supports it. In addition, by default, automatically generated normal information will also be written if it relies on information (such as hard edges) that can’t be reconstructed from the stored file information.
Whether or not normal information will be written is returned by the method
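```java
boolean getWriteNormals(); // name taken from the discussion below
```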
This will always return true if any of the conditions described above have been met. So for example, if a PolygonalMesh contains hard edges, and multiple automatic normals are enabled (i.e., getMultipleAutoNormals() returns true), then getWriteNormals() will return true.
Default normal writing behavior can be overridden within the MeshWriter classes using the following methods:
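```java
// paraphrased; the legal values of enable are described below:
void setWriteNormals (int enable);
int getWriteNormals();
```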
where enable selects one of the following behaviors:

- normals will never be written;
- normals will always be written;
- normals will be written according to the default behavior described above.
When reading a PolygonalMesh from a file, if the file contains normal information with multiple normals per vertex that suggests the existence of hard edges, then the corresponding edges are set to be hard within the mesh.
ArtiSynth contains primitives for performing constructive solid geometry (CSG) operations on volumes bounded by triangular meshes. The class that performs these operations is maspack.collision.SurfaceMeshIntersector, and it works by robustly determining the intersection contour(s) between a pair of meshes, and then using these to compute the triangles that need to be added or removed to produce the necessary CSG surface.
The CSG operations include union, intersection, and difference, and are implemented by the following methods of SurfaceMeshIntersector:
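```java
// paraphrased; the difference methods are assumed to come in both orderings:
PolygonalMesh findUnion (PolygonalMesh mesh0, PolygonalMesh mesh1);
PolygonalMesh findIntersection (PolygonalMesh mesh0, PolygonalMesh mesh1);
PolygonalMesh findDifference01 (PolygonalMesh mesh0, PolygonalMesh mesh1);
PolygonalMesh findDifference10 (PolygonalMesh mesh0, PolygonalMesh mesh1);
```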
Each takes two PolygonalMesh objects, mesh0 and mesh1, and creates and returns another PolygonalMesh which represents the boundary surface of the requested operation. If the result of the operation is empty, the returned mesh will be empty.
The example below uses findUnion to create a dumbbell shaped mesh from two balls and a cylinder:
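```java
// a sketch; dimensions, mesh resolutions, and the five-argument
// createCylinder form are assumptions:
PolygonalMesh ball0 = MeshFactory.createIcosahedralSphere (/*r=*/1.0, /*ndivs=*/2);
PolygonalMesh ball1 = MeshFactory.createIcosahedralSphere (/*r=*/1.0, /*ndivs=*/2);
ball0.transform (new RigidTransform3d (0, 0, -2));
ball1.transform (new RigidTransform3d (0, 0, 2));
PolygonalMesh rod = MeshFactory.createCylinder (
   /*r=*/0.5, /*h=*/4.0, /*ns=*/32, /*nr=*/1, /*nh=*/10);

// combine the three meshes using findUnion
SurfaceMeshIntersector smi = new SurfaceMeshIntersector();
PolygonalMesh dumbbell = smi.findUnion (ball0, rod);
dumbbell = smi.findUnion (dumbbell, ball1);
```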
The balls and cylinder are created using the MeshFactory methods createIcosahedralSphere() and createCylinder(), where the latter takes arguments ns, nr, and nh giving the number of slices along the circumference, end-cap radius, and length. The final resulting mesh is shown in Figure 2.1.
ArtiSynth applications frequently need to read in various kinds of data files, including mesh files (as discussed in Section 2.5.5), FEM mesh geometry (Section 6.2.2), probe data (Section 5.4.4), and custom application data.
Often these data files do not reside in an absolute location but instead in a location relative to the application’s class or source files. For example, it is common for applications to store geometric data in a subdirectory "geometry" located beneath the source directory. In order to access such files in a robust way, and ensure that the code does not break when the source tree is moved, it is useful to determine the application’s source (or class) directory at run time. ArtiSynth supplies several ways to conveniently handle this situation. First, the RootModel itself supplies the following methods:
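```java
// assumed forms; the first name in particular may differ in the actual API:
String getSourceDir();                         // source directory of the root model
String getSourceRelativePath (String relpath); // file path relative to that directory
```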
The first method returns the path to the source directory of the root model, while the second returns the path to a file specified relative to the root model source directory. If the root model source directory cannot be found (see discussion at the end of this section) both methods return null. As a specific usage example, assume that we have an application model whose build() method needs to load in a mesh torus.obj from a subdirectory meshes located beneath the source directory. This could be done as follows:
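```java
// sketch of such a build() fragment:
String meshPath = getSourceRelativePath ("meshes/torus.obj");
PolygonalMesh mesh = null;
try {
   mesh = new PolygonalMesh (meshPath);
}
catch (IOException e) {
   System.err.println ("Can't read mesh: " + e);
}
```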
A more general path finding utility is provided by maspack.util.PathFinder, which provides several static methods for locating source and class directories:
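```java
// static methods of PathFinder (signatures paraphrased):
String findClassDir (Object classObj);
String getClassRelativePath (Object classObj, String relpath);
String findSourceDir (Object classObj);
String getSourceRelativePath (Object classObj, String relpath);
```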
The “find” methods return a string path to the indicated class or source directory, while the “relative path” methods locate the class or source directory and append the additional path relpath. For all of these, the class is determined from classObj, either directly (if it is an instance of Class), by name (if it is a String), or otherwise by calling classObj.getClass(). When identifying a package by name, the name should be either a fully qualified class name, or a simple name that can be located with respect to the packages obtained via Package.getPackages(). For example, if we have a class whose fully qualified name is artisynth.models.test.Foo, then the following calls should all return the same result:
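```java
PathFinder.findSourceDir (new Foo());                   // class from an instance
PathFinder.findSourceDir (Foo.class);                   // class directly
PathFinder.findSourceDir ("artisynth.models.test.Foo"); // fully qualified name
PathFinder.findSourceDir ("Foo");                       // simple name (if locatable)
```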
If the source directory for Foo happens to be /home/projects/src/artisynth/models/test, then
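```java
PathFinder.getSourceRelativePath (Foo.class, "geometry/mesh.obj")
```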
will return /home/projects/src/artisynth/models/test/geometry/mesh.obj.
When calling PathFinder methods from within the relevant class, one can specify this as the classObj argument.
With respect to the above example locating the file "meshes/torus.obj", the call to the root model method getSourceRelativePath() could be replaced with
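```java
PathFinder.getSourceRelativePath (this, "meshes/torus.obj")
```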
Since this is assumed to be called from the root model’s build method, the “class” can be indicated by simply passing this to getSourceRelativePath().
As an alternative to placing data files in the source directory, one could place them in the class directory, and then use findClassDir() and getClassRelativePath(). If the data files were originally defined in the source directory, it will be necessary to copy them to the class directory. Some Java IDEs will perform this automatically.
The PathFinder methods work by climbing the class’s resource hierarchy. Source directories are assumed to be located relative to the parent of the root class directory, via one of the paths specified by getSourceRootPaths(). By default, this list includes "src", "source", and "bin". Additional paths can be added using addSourceRootPath(path), or the entire list can be set using setSourceRootPaths(paths).
At present, source directories will not be found if the reference class is contained in a jar file.
ArtiSynth applications often require the use of large data files to specify items such as FEM mesh geometry, surface mesh geometry, or medical imaging data. The size of these files may make it inconvenient to store them in any version control system that is used to store the application source code. As an alternative, ArtiSynth provides a file manager utility that allows such files to be stored on a separate server, and then downloaded on-demand and cached locally. To use this, one starts by creating an instance of a FileManager, using the constructor
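```java
FileManager (String downloadPath, String remoteSourceName)
```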
where downloadPath is a path to the local directory where the downloaded file should be placed, and remoteSourceName is a URI indicating the remote server location of the files. After the file manager has been created, it can be used to fetch remote files and cache them locally, using various get methods:
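```java
// paraphrased signatures:
File get (String destName);
File get (String destName, String sourceName);
```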
Both of these look for the file destName specified relative to the local directory, and return a File handle for it if it is present. Otherwise, they attempt to download the file from the remote source location, place it in the local directory, and return a File handle for it. The location of the remote file is given relative to the remote source URI by destName for the first method and sourceName for the second.
A simple example of using a file manager within a RootModel build() method is given by the following fragment:
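```java
// a sketch matching the description below:
FileManager fm = new FileManager (
   getSourceRelativePath ("geometry"),             // local download directory
   "http://myserver.org/artisynth/data/geometry"); // remote source location
File meshFile = fm.get ("tibia.obj");              // fetch and/or cache the file
```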
Here, a file manager is created that uses a local directory "geometry", located relative to the RootModel source directory (see Section 2.6), and looks for missing files relative to the URI
http://myserver.org/artisynth/data/geometry
The get() method is then used to obtain the file "tibia.obj" from the local directory. If it is not already present, it is downloaded from the remote location.
The FileManager contains other features and functionality, and one should consult its API documentation for more information.
This section details how to build basic multibody-type mechanical models consisting of particles, springs, rigid bodies, joints, and other constraints.
The most basic type of mechanical model consists simply of particles connected together by axial springs. Particles are implemented by the class Particle, which is a dynamic component containing a three-dimensional position state, a corresponding velocity state, and a mass. It is an instance of the more general base class Point, which is used to also implement spatial points such as markers which do not have a mass.
An axial spring is a simple spring that connects two points and is implemented by the class AxialSpring. This is a force effector component that exerts equal and opposite forces on the two points, along the line separating them, with a magnitude $f$ that is a function $f(l, \dot{l})$ of the distance $l$ between the points, and the distance derivative $\dot{l}$.

Each axial spring is associated with an axial material, implemented by a subclass of AxialMaterial, that specifies the function $f(l, \dot{l})$. The most basic type of axial material is a LinearAxialMaterial, which determines $f$ according to the linear relationship

$$f(l, \dot{l}) = k(l - l_0) + d\dot{l} \qquad (3.1)$$

where $l_0$ is the rest length and $k$ and $d$ are the stiffness and damping terms. Both $k$ and $d$ are properties of the material, while $l_0$ is a property of the spring.
Axial springs are assigned a linear axial material by default. More complex, nonlinear axial materials may be defined in the package artisynth.core.materials. Setting or querying a spring’s material may be done with the methods setMaterial() and getMaterial().
A complete application model that implements a simple particle-spring model is given below.
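The listing below is a reconstruction of this demo (artisynth.demos.tutorial.ParticleSpring); line numbers are included because the discussion that follows refers to them, and minor details may differ from the distributed source:

```java
1  package artisynth.demos.tutorial;
2
3  import java.awt.Color;
4  import maspack.matrix.*;
5  import maspack.render.*;
6  import artisynth.core.mechmodels.*;
7  import artisynth.core.materials.*;
8  import artisynth.core.workspace.RootModel;
9
10 /**
11  * Demo of two particles connected by an axial spring
12  */
13 public class ParticleSpring extends RootModel {
14
15    public void build (String[] args) {
16
17       // create MechModel and add to RootModel
18       MechModel mech = new MechModel ("mech");
19       addModel (mech);
20
21       // create the components
22       Particle p1 = new Particle ("p1", /*mass=*/2, /*x,y,z=*/0, 0, 0);
23       Particle p2 = new Particle ("p2", 2, 1, 0, 0);
24       AxialSpring spring = new AxialSpring ("spr", /*restLength=*/0);
25       spring.setPoints (p1, p2);
26       spring.setMaterial (
27          new LinearAxialMaterial (/*stiffness=*/20, /*damping=*/10));
28
29       // add components to the mech model
30       mech.addParticle (p1);
31       mech.addParticle (p2);
32       mech.addAxialSpring (spring);
33
34       p1.setDynamic (false);            // first particle set to be fixed
35
36       // increase model bounding box for the viewer
37       mech.setBounds (/*min=*/-1, 0, -1, /*max=*/1, 0, 0);
38       // set render properties for the components
39       RenderProps.setSphericalPoints (p1, 0.06, Color.RED);
40       RenderProps.setSphericalPoints (p2, 0.06, Color.RED);
41       RenderProps.setCylindricalLines (spring, 0.02, Color.BLUE);
42    }
43 }
```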
Line 1 of the source defines the package in which the model class will reside, in this case artisynth.demos.tutorial. Lines 3-8 import definitions for other classes that will be used.
The model application class is named ParticleSpring and declared to extend RootModel (line 13), and the build() method definition begins at line 15. (A no-args constructor is also needed, but because no other constructors are defined, the compiler creates one automatically.)
To begin, the build() method creates a MechModel named "mech", and then adds it to the models list of the root model using the addModel() method (lines 18-19). Next, two particles, p1 and p2, are created, with masses equal to 2 and initial positions at 0, 0, 0, and 1, 0, 0, respectively (lines 22-23). Then an axial spring is created, with end points set to p1 and p2, and assigned a linear material with a stiffness and damping of 20 and 10 (lines 24-27). Finally, after the particles and the spring are created, they are added to the particles and axialSprings lists of the MechModel using the methods addParticle() and addAxialSpring() (lines 30-32).
At this point in the code, both particles are defined to be dynamically controlled, so that running the simulation would cause both to fall under the MechModel’s default gravity acceleration of $(0, 0, -9.8)^T$. However, for this example, we want the first particle to remain fixed in place, so we set it to be non-dynamic (line 34), meaning that the physical simulation will not update its position in response to forces (Section 3.1.3).
The remaining calls control aspects of how the model is graphically rendered. setBounds() (line 37) increases the model’s “bounding box” so that by default it will occupy a larger part of the viewer frustum. The convenience method RenderProps.setSphericalPoints() is used to set points p1 and p2 to render as solid red spheres with a radius of 0.06, while RenderProps.setCylindricalLines() is used to set the spring to render as a solid blue cylinder with a radius of 0.02. More details about setting render properties are given in Section 4.3.
By default, a dynamic component is advanced through time in response to the forces applied to it. However, it is also possible to set a dynamic component’s dynamic property to false, so that it does not respond to force inputs. As shown in the example above, this can be done using the method setDynamic():
comp.setDynamic (false);
The method isDynamic() can be used to query the dynamic property.
Dynamic components can also be attached to other dynamic components (as mentioned in Section 1.2) so that their positions and velocities are controlled by the master components that they are attached to. To attach a dynamic component, one creates an AttachmentComponent specifying the attachment connection and adds it to the MechModel, as described in Section 3.7. The method isAttached() can be used to determine if a component is attached, and if it is, getAttachment() can be used to find the corresponding AttachmentComponent.
Overall, a dynamic component can be in one of three states:

- 1. Component is dynamic and unattached. The method isActive() returns true. The component will move in response to forces.

- 2. Component is not dynamic, and is unattached. The method isParametric() returns true. The component will either remain fixed, or will move around in response to external inputs specifying the component’s position and/or velocity. One way to supply such inputs is to use controllers or input probes, as described in Section 5.

- 3. Component is attached. The method isAttached() returns true. The component will move so as to follow the other master component(s) to which it is attached.
Application authors may create their own axial materials by subclassing AxialMaterial and overriding the functions
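    // a sketch of the overridable methods, based on the surrounding
    // description; consult the AxialMaterial API for the exact signatures
    double computeF (
       double l, double ldot, double l0, double excitation);
    double computeDFdl (
       double l, double ldot, double l0, double excitation);
    double computeDFdldot (
       double l, double ldot, double l0, double excitation);
    boolean isDFdldotZero();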
where excitation is an additional excitation signal $a$, which is used to implement active springs and which in particular is used to implement axial muscles (Section 4.5), for which $a$ is usually in the range $[0,1]$.
The first three methods should return the values of

$$f(l,\dot l,a), \quad \frac{\partial f(l,\dot l,a)}{\partial l}, \quad \text{and} \quad \frac{\partial f(l,\dot l,a)}{\partial \dot l}, \qquad (3.2)$$

respectively, while the last method should return true if $\partial f/\partial \dot l \equiv 0$; i.e., if it is always equal to 0.
Mechanical models usually contain damping forces in addition to spring-type restorative forces. Damping generates forces that reduce dynamic component velocities, and is usually the major source of energy dissipation in the model. Damping forces can be generated by the spring components themselves, as described above.
A general damping can be set for all particles by setting the MechModel’s pointDamping property. This causes a force

$$\mathbf{f} = -d \, \mathbf{v} \qquad (3.3)$$

to be applied to all particles, where $d$ is the value of the pointDamping property and $\mathbf{v}$ is the particle’s velocity.
pointDamping can be set and queried using the MechModel methods setPointDamping() and getPointDamping().
In general, whenever a component has a property propX, that property can be set and queried in code using methods of the form
    setPropX (T d);
    T getPropX();

where T is the type associated with the property.
pointDamping can also be set for particles individually. This property is inherited (Section 1.4.3), so that if not set explicitly, it inherits the nearest explicitly set value in an ancestor component.
Rigid bodies are implemented in ArtiSynth by the class RigidBody, which is a dynamic component containing a six-dimensional position and orientation state, a corresponding velocity state, an inertia, and an optional surface mesh.
A rigid body is associated with its own 3D spatial coordinate frame, and is a subclass of the more general Frame component. The combined position and orientation of this frame with respect to world coordinates defines the body’s pose, and the associated 6 degrees of freedom describe its “position” state.
ArtiSynth makes extensive use of markers, which are (massless) points attached to dynamic components in the model. Markers are used for graphical display, implementing attachments, and transmitting forces back onto the underlying dynamic components.
A frame marker is a marker that can be attached to a Frame, and most commonly to a RigidBody (Figure 3.2). They are frequently used to provide the anchor points for attaching springs and, more generally, applying forces to the body.
Frame markers are implemented by the class FrameMarker, which is a subclass of Point. Its methods getLocation() and setLocation() get and set the marker’s location $\mathbf{l}$ with respect to the frame’s coordinate system. When a 3D force $\mathbf{f}$ is applied to the marker, it generates a spatial force (Section A.5) on the frame given by

$$\hat{\mathbf{f}} = \begin{pmatrix} \mathbf{f} \\ \mathbf{l} \times \mathbf{f} \end{pmatrix}. \qquad (3.4)$$
Frame markers can be created using a variety of constructors, including
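    // typical constructor forms; a sketch based on the description below
    FrameMarker ();
    FrameMarker (String name);
    FrameMarker (Frame frame, Point3d loc);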
where FrameMarker() creates an empty marker, FrameMarker(name) creates an empty marker with a name, and FrameMarker(frame,loc) creates an unnamed marker attached to frame at the location loc with respect to the frame’s coordinates. Once created, a marker’s frame can be set and queried with setFrame() and getFrame().
A frame marker can be added to a MechModel with the MechModel methods
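    // sketches of the MechModel methods; exact signatures may vary
    void addFrameMarker (FrameMarker mkr);
    void addFrameMarker (FrameMarker mkr, Frame frame, Point3d loc);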
where addFrameMarker(mkr,frame,loc) also sets the frame and the marker’s location with respect to it.
MechModel also supplies convenience methods to create a marker, attach it to a frame, and add it to the model:
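    // convenience methods returning the created marker (a sketch)
    FrameMarker addFrameMarker (Frame frame, Point3d loc);
    FrameMarker addFrameMarkerWorld (Frame frame, Point3d pos);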
Both methods return the created marker. The first, addFrameMarker(frame,loc), places it at the location loc with respect to the frame, while addFrameMarkerWorld(frame,pos) places it at pos with respect to world coordinates.
A simple rigid body-spring model is defined in
artisynth.demos.tutorial.RigidBodySpring
This differs from ParticleSpring only in the build() method, which is listed below:
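A condensed sketch of the method follows; the line numbers cited below refer to the full demo source, and the box dimensions and positions are assumptions.

    public void build (String[] args) {
       MechModel mech = new MechModel ("mech");
       addModel (mech);
       // create a particle and a box-shaped rigid body
       Particle p1 = new Particle ("p1", /*mass=*/2, 0, 0, 0);
       RigidBody box = RigidBody.createBox (
          "box", /*wx,wy,wz=*/0.5, 0.3, 0.3, /*density=*/20);
       // move the box using a translation-only pose
       box.setPose (new RigidTransform3d (0.75, 0, 0));
       // create a marker on the box to anchor the spring
       FrameMarker mkr = new FrameMarker();
       mkr.setFrame (box);
       mkr.setLocation (new Point3d (-0.25, 0, 0));
       // create the spring between the particle and the marker
       AxialSpring spring = new AxialSpring ("spr", /*restLength=*/0);
       spring.setPoints (p1, mkr);
       spring.setMaterial (
          new LinearAxialMaterial (/*stiffness=*/20, /*damping=*/10));
       // add components to the MechModel
       mech.addParticle (p1);
       mech.addRigidBody (box);
       mech.addFrameMarker (mkr);
       mech.addAxialSpring (spring);
       p1.setDynamic (false); // fix the particle
       // render settings omitted; similar to ParticleSpring
    }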
The differences from ParticleSpring begin
at line 9. Instead of creating a second particle, a rigid body is
created using the factory method
RigidBody.createBox(), which
takes x, y, z widths and a (uniform) density and creates a box-shaped
rigid body complete with surface mesh and appropriate mass and
inertia. As the box is initially centered at the origin, moving it
elsewhere requires setting the body’s pose, which is done using setPose(). The RigidTransform3d passed to setPose() is
created using a three-argument constructor that generates a
translation-only transform. Next, starting at line 14, a FrameMarker is created for a location relative to the
rigid body, and attached to the body using its setFrame()
method.
The remainder of build() is the same as for ParticleSpring, except that the spring is attached to the frame marker instead of a second particle.
As illustrated above, rigid bodies can be created using factory methods supplied by RigidBody. Some of these include:
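    // a few of the RigidBody factory methods; signatures are sketches,
    // and the full set is given in the RigidBody API documentation
    RigidBody.createBox (
       String name, double wx, double wy, double wz, double density);
    RigidBody.createCylinder (
       String name, double r, double h, double density, int nsides);
    RigidBody.createSphere (
       String name, double r, double density, int nslices);
    RigidBody.createEllipsoid (
       String name, double a, double b, double c, double density, int nslices);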
The bodies do not need to be named; if no name is desired, then name can be specified as null.
In addition, there are also factory methods for creating a rigid body directly from a mesh:
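    // sketches of the mesh-based factory methods
    RigidBody.createFromMesh (
       String name, PolygonalMesh mesh, double density, double scale);
    RigidBody.createFromMesh (
       String name, String meshPath, double density, double scale);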
These take either a polygonal mesh (Section 2.5), or a file name from which a mesh is read, and use it as the body’s surface mesh and then compute the mass and inertia properties from the specified (uniform) density.
When a body is created directly from a surface mesh, its center of mass will typically not be coincident with the origin of its coordinate frame. Section 3.2.6 discusses the implications of this and how to correct it.
Alternatively, one can create a rigid body directly from a constructor, and then set the mesh and inertia properties explicitly:
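    // a sketch: create the body, then set mesh and inertia explicitly;
    // 'mesh' is assumed to be an existing PolygonalMesh
    RigidBody body = new RigidBody ("body");
    body.setSurfaceMesh (mesh);
    body.setInertia (/*mass=*/10, /*Jxx=*/1, /*Jyy=*/1, /*Jzz=*/1);
    mech.addRigidBody (body);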
A body’s pose can be set and queried using the methods
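    // sketches of the pose accessors
    void setPose (RigidTransform3d TBW);
    RigidTransform3d getPose();            // returns an internal reference
    void getPose (RigidTransform3d TBW);   // copies the pose into TBW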
These use a RigidTransform3d (Section 2.2) to describe the pose. Body poses are described in world coordinates and specify the transform from body to world coordinates. In particular, the pose for a body A specifies the rigid transform $T_{AW}$.
Rigid bodies also expose the translational and rotational components of their pose via the properties position and orientation, which can be queried and set independently using the methods
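    // sketches of the position/orientation accessors
    Point3d getPosition();
    void setPosition (Point3d pos);
    AxisAngle getOrientation();
    void setOrientation (AxisAngle axisAng);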
The velocity of a rigid body is described using a Twist (Section 2.4), which contains both the translational and rotational velocities. The following methods set and query the spatial velocity as described with respect to world coordinates:
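    // sketches of the velocity accessors (world coordinates)
    Twist getVelocity();
    void setVelocity (Twist vel);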
During simulation, unless a rigid body has been set to be parametric (Section 3.1.3), its pose and velocity are updated in response to forces, so setting the pose or velocity generally makes sense only for setting initial conditions. On the other hand, if a rigid body is parametric, then it is possible to control its pose during the simulation, but in that case it is better to set its target pose and/or target velocity, as described in Section 5.3.1.
The “mass” of a rigid body is described by its spatial inertia, which is a $6 \times 6$ matrix relating its spatial velocity to its spatial momentum (Section A.6). Within ArtiSynth, spatial inertia is described by a SpatialInertia object, which specifies its mass, center of mass (with respect to body coordinates), and rotational inertia (with respect to the center of mass).
Most rigid bodies are also associated with a polygonal surface mesh, which can be set and queried using the methods
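    // sketches of the surface mesh accessors
    PolygonalMesh getSurfaceMesh();
    void setSurfaceMesh (PolygonalMesh mesh);
    void setSurfaceMesh (PolygonalMesh mesh, String fileName);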
The second method takes an optional fileName argument that can be set to the name of a file from which the mesh was read. Then if the model itself is saved to a file, the model file will specify the mesh using the file name instead of explicit vertex and face information, which can reduce the model file size considerably.
Rigid bodies can also have more than one mesh, as described in Section 3.2.9.
The inertia of a rigid body can be explicitly set using a variety of methods including
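    // sketches of typical setInertia() forms
    void setInertia (SpatialInertia M);
    void setInertia (double mass, double Jxx, double Jyy, double Jzz);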
and can be queried using
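    // a sketch; the exact query method should be checked in the API
    SpatialInertia getInertia();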
In practice, it is often more convenient to simply specify a mass or a density, and then use the geometry of the surface mesh (and possibly other meshes, Section 3.2.9) to compute the remaining inertial values. How a rigid body’s inertia is computed is determined by its inertiaMethod property, which can be one of the following values:

EXPLICIT: Inertia is set explicitly.

MASS: Inertia is determined implicitly from the mesh geometry and the body’s mass.

DENSITY: Inertia is determined implicitly from the mesh geometry and the body’s density (which is multiplied by the mesh volume(s) to determine a mass).
When using DENSITY to determine the inertia, it is generally assumed that the contributing meshes are both polygonal and closed. Meshes which are either open or non-polygonal generally do not have a well-defined volume which can be multiplied by the density to determine the mass.
The inertiaMethod property can be set and queried using setInertiaMethod() and getInertiaMethod(), and its default value is DENSITY. Explicitly setting the inertia using one of the setInertia() methods described above will set inertiaMethod to EXPLICIT. The method setInertiaFromDensity(density) will (re)compute the inertia using the mesh geometry and a density value and set inertiaMethod to DENSITY, and the method setInertiaFromMass(mass) will (re)compute the inertia using the mesh geometry and a mass value and set inertiaMethod to MASS.
Finally, the (assumed uniform) density of the body can be queried using getDensity().
There are some subtleties involved in determining the inertia using either the DENSITY or MASS methods when the rigid body contains more than one mesh. Details are given in Section 3.2.9.
It is important to note that the origin of a body’s coordinate frame will not necessarily coincide with its center of mass (COM), and in fact the frame origin does not even have to lie inside the body’s surface (Figure 3.4). This typically occurs when a body’s inertia is computed directly from its surface mesh (or meshes), as described in Section 3.2.5.
Having the COM differ from the frame origin may lead to some undesired effects. For instance, since the body’s spatial velocity is defined with respect to the frame origin and not the COM, if the two are not coincident, then a purely angular body velocity will cause the COM to translate. The body’s spatial inertia also becomes more complicated, with non-zero 3 x 3 blocks in the lower left and upper right (Section A.6), which can have a small effect on computational accuracy. Finally, manipulating a body’s pose in the ArtiSynth UI (as described in the section “Model Manipulation” in the ArtiSynth User Interface Guide) can also be more cumbersome if the origin is located far from the COM.
There are several ways to ensure that the COM and frame origin are coincident. The most direct is to call the method centerPoseOnCenterOfMass() after the body has been created:
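    // shift the body frame to the COM, compensating the mesh vertices
    body.centerPoseOnCenterOfMass();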
This will shift the body’s frame to be coincident with the COM, while at the same time translating its mesh vertices in the opposite direction so that its mesh (or meshes) don’t move with respect to world coordinates. The spatial inertia is updated as well.
Alternatively, if the body is being created from a single mesh, one may transform that mesh to be centered on its COM before it is used to define the body. This can be done using the PolygonalMesh method translateToCenterOfVolume(), which centers a mesh’s vertices on its COM (assuming a uniform density):
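    // center the mesh on its COM before creating the body;
    // density and scale values are examples
    mesh.translateToCenterOfVolume();
    RigidBody body = RigidBody.createFromMesh (
       "body", mesh, /*density=*/10, /*scale=*/1);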
As with particles, it is possible to set damping parameters for rigid bodies. Damping can be specified in two different ways:
Translational/rotational damping which is proportional to a body’s translational and rotational velocity;
Inertial damping, which is proportional to a body’s spatial inertia multiplied by its spatial velocity.
Translational/rotational damping is controlled by the MechModel properties frameDamping and rotaryDamping, and generates a spatial force centered on each rigid body’s coordinate frame given by

$$\hat{\mathbf{f}} = -\begin{pmatrix} d_f \, \mathbf{v} \\ d_r \, \boldsymbol{\omega} \end{pmatrix}, \qquad (3.5)$$

where $d_f$ and $d_r$ are the frameDamping and rotaryDamping values, and $\mathbf{v}$ and $\boldsymbol{\omega}$ are the translational and angular velocity of the body’s coordinate frame. The damping parameters can be set and queried using the MechModel methods setFrameDamping(), getFrameDamping(), setRotaryDamping(), and getRotaryDamping().
These damping parameters can also be set for individual bodies using their own (inherited) frameDamping and rotaryDamping properties.
For models involving rigid bodies, it is often necessary to set rotaryDamping to a non-zero value because frameDamping will provide no damping at all when a rigid body is simply rotating about its coordinate frame origin.
Inertial damping is controlled by the MechModel property inertialDamping, and generates a spatial force centered on a rigid body’s coordinate frame given by

$$\hat{\mathbf{f}} = -d_I \, \mathbf{M} \, \hat{\mathbf{v}}, \qquad (3.6)$$

where $d_I$ is the inertialDamping value, $\mathbf{M}$ is the body’s spatial inertia matrix (Section A.6), and $\hat{\mathbf{v}}$ is the body’s spatial velocity. The inertial damping property can be set and queried using the MechModel methods setInertialDamping() and getInertialDamping().
This parameter can also be set for individual bodies using their own (inherited) inertialDamping property.
Inertial damping offers two advantages over translational/rotational damping:
- 1. It is independent of the location of the body’s coordinate frame with respect to its center of mass;

- 2. There is no need to adjust two different translational and rotational parameters or to consider their relative sizes, as these considerations are contained within the spatial inertia itself.
A rigid body is rendered in ArtiSynth by drawing its mesh (or meshes, Section 3.2.9) and/or coordinate frame.
Meshes are drawn using the face rendering properties described in more detail in Section 4.3. The most commonly used of these are:
faceColor: A value of type java.awt.Color giving the color of mesh faces. The default value is GRAY.
shading: A value of type Renderer.Shading indicating how the mesh should be shaded, with the options being FLAT, SMOOTH, METAL, and NONE. The default value is FLAT.
alpha: A double value between 0 and 1 indicating transparency, with transparency increasing as value decreases from 1. The default value is 1.
faceStyle: A value of type Renderer.FaceStyle indicating which face sides should be drawn, with the options being FRONT, BACK, FRONT_AND_BACK, and NONE. The default value is FRONT.
drawEdges: A boolean indicating whether the mesh edges should also be drawn, using either the edgeColor rendering property, or the lineColor property if edgeColor is not set. The default value is false.
edgeWidth: An integer giving the width of the mesh edges in pixels.
These properties, and others, can be set either interactively in the GUI, or in code. To set the render properties in the GUI, select the rigid body or its mesh component, and then right click the mouse and choose Edit render props .... More details are given in the section “Render properties” in the ArtiSynth User Interface Guide.
[Figure 3.5: a rigid body hip mesh rendered with different properties: default gray with flat shading (left), a lighter color with smooth shading (center), and as a wire frame (right).]
Properties can also be set in code, usually during the build() method. Typically this is done using a static method of the RenderProps class that has the form
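    // the general pattern; XXX stands for the property name
    RenderProps.setXXX (comp, value);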
where XXX is the property name, comp is the component for which the property should be set, and value is the desired value. Some examples are shown in Figure 3.5 for a rigid body hip representation with a fairly coarse mesh. The left image shows the default rendering, using a gray color and flat shading. The center image shows a lighter color and smooth shading, which could be set by the following code fragment:
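    // a sketch; 'body' denotes the hip rigid body (a hypothetical
    // variable), and the exact color values are assumptions
    RenderProps.setFaceColor (body, new Color (1f, 1f, 0.8f));
    RenderProps.setShading (body, Renderer.Shading.SMOOTH);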
Finally, the right image shows the body rendered as a wire frame, which can be done by setting faceStyle to NONE and drawEdges to true:
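    // a sketch; the edge color is an assumption
    RenderProps.setFaceStyle (body, Renderer.FaceStyle.NONE);
    RenderProps.setDrawEdges (body, true);
    RenderProps.setLineColor (body, Color.CYAN);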
Render properties can also be set in higher level model components, from which their values will be inherited by lower level components that have not explicitly set their own values. For example, setting the faceColor render property in the MechModel will automatically set the face color for all subcomponents which have not explicitly set faceColor. More details on render properties are given in Section 4.3.
[Figure 3.6: rigid body coordinate axes rendered as lines (left) and as solid arrows (right).]
In addition to mesh rendering, it is often useful to draw a rigid body’s coordinate frame, which can be done using its axisLength and axisDrawStyle properties. Setting axisLength to a positive value will cause the body’s three coordinate axes to be drawn, with the indicated length, with the $x$, $y$ and $z$ axes colored red, green, and blue, respectively. The axisDrawStyle property controls how the axes are rendered (Figure 3.6). It has the type Renderer.AxisDrawStyle, and can be set to the following values:
OFF: Axes are not rendered.

LINE: Axes are rendered as simple red-green-blue lines, with a width given by the body’s lineWidth rendering property.

ARROW: Axes are rendered as solid red-green-blue arrows.
As with the rendering properties, the axisLength and axisDrawStyle properties can be managed either interactively in the GUI (by selecting the body, right clicking and choosing Edit properties ...), or in code, using the following methods:
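    // sketches of the accessors
    void setAxisLength (double len);
    double getAxisLength();
    void setAxisDrawStyle (Renderer.AxisDrawStyle style);
    Renderer.AxisDrawStyle getAxisDrawStyle();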
A RigidBody may contain multiple meshes, which can be useful for various reasons:
It may be desirable to use different meshes for collision detection, inertia computation, and visual presentation;
Different render properties can be set for different mesh components, allowing the body to be rendered in a more versatile way;
Different mesh components can be selected individually.
Each rigid body mesh is encapsulated inside a RigidMeshComp component, which is in turn stored in a subcomponent list called meshes. Meshes do not need to be instances of PolygonalMesh; instead, they can be any instance of MeshBase, including PointMesh and PolylineMesh.
The default surface mesh, returned by getSurfaceMesh(), is also stored inside a RigidMeshComp in the meshes list. The surface mesh is defined to be the first mesh in meshes that is an instance of PolygonalMesh; by default, this is simply the first mesh in the list. The RigidMeshComp containing the surface mesh can be obtained using the method getSurfaceMeshComp().
A RigidMeshComp contains a number of properties that control how the mesh is displayed and interacts with its rigid body:
renderProps: Render properties controlling how the mesh is rendered (see Section 4.3).

hasMass: A boolean, which if true means that the mesh will contribute to the body’s inertia when the inertiaMethod is either MASS or DENSITY. The default value is true.

massDistribution: An enumerated type defined by MassDistribution which specifies how the mesh’s inertia contribution is determined for a given mass. VOLUME, AREA, LENGTH, and POINT indicate, respectively, that the mass is distributed evenly over the mesh’s volume, area (faces), length (edges), or points. The default value is determined by the mesh type: VOLUME for a closed PolygonalMesh, AREA for an open PolygonalMesh, LENGTH for a PolylineMesh, and POINT for a PointMesh. Applications can specify an alternate value providing the mesh has the features to support it. Specifying DEFAULT will restore the default value.

volume: A double whose value is the volume of the mesh. If the mesh is a PolygonalMesh, this is the value returned by its computeVolume() method. Otherwise, the volume is 0, unless setVolume(vol) is used to explicitly set a non-zero volume value.

mass: A double whose default value is the product of the density and volume properties. Otherwise, if mass has been explicitly set using setMass(mass), the value is the explicit mass.

density: A double whose default value is the rigid body’s density. Otherwise, if density has been explicitly set using setDensity(density), the value is the explicit density, or if mass has been explicitly set using setMass(mass), the value is the explicit mass divided by volume.
Note that by default, the density of a RigidMeshComp is simply the density setting for the rigid body, and the mass is this times the volume. However, it is possible to set either an explicit mass or a density value that will override this. (Also, explicitly setting a mass will unset any explicit density, and explicitly setting the density will unset any explicit mass.)
When the inertiaMethod of the rigid body is either MASS or DENSITY, its inertia is computed from the sum of the inertias $\mathbf{M}_k$ of all the component meshes for which hasMass is true. Each $\mathbf{M}_k$ is computed by the mesh’s createInertia(mass,massDistribution) method, using the mass and massDistribution properties of its RigidMeshComp.
When forming the body inertia from the inertia components of individual meshes, no attempt is made to account for mesh overlap. If this is important, the meshes themselves should be modified in advance so that they do not overlap, perhaps by using the CSG primitives described in Section 2.5.7.
Instances of RigidMeshComp can be created directly, using constructions such as the following:
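    // sketches; consult the RigidMeshComp API for the exact constructors
    RigidMeshComp mcomp = new RigidMeshComp (mesh);
    // or
    RigidMeshComp mcomp = new RigidMeshComp ("meshName");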
after which they can be added or removed from the meshes list using the methods
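    // sketches of the add/remove methods
    void addMeshComp (RigidMeshComp mcomp);
    boolean removeMeshComp (RigidMeshComp mcomp);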
It is also possible to add meshes directly to the meshes list, using the methods
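    // sketches of the direct addMesh() methods
    RigidMeshComp addMesh (MeshBase mesh);
    RigidMeshComp addMesh (MeshBase mesh, boolean hasMass, boolean collidable);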
each of which creates a RigidMeshComp, adds it to the mesh list, and returns it. The second method also specifies the values of the hasMass and collidable properties (both of which are true by default).
An example of constructing a rigid body from multiple meshes is defined in
artisynth.demos.tutorial.RigidCompositeBody
This uses three meshes to construct a rigid body whose shape resembles a dumbbell. The code, with the include files omitted, is listed below:
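A condensed sketch of the code is shown here; mesh dimensions, positions, and render values are assumptions, and the line numbers cited below refer to the full demo source.

    public void build (String[] args) {
       // create a MechModel and add it to the root model
       MechModel mech = new MechModel ("mech");
       addModel (mech);

       // create two ball meshes and an axis mesh, positioned with
       // respect to the body's coordinate frame
       PolygonalMesh ball1 = MeshFactory.createSphere (0.8, 20);
       ball1.transform (new RigidTransform3d (0, 0, 1.5));
       PolygonalMesh ball2 = MeshFactory.createSphere (0.8, 20);
       ball2.transform (new RigidTransform3d (0, 0, -1.5));
       PolygonalMesh axis = MeshFactory.createCylinder (
          /*r=*/0.2, /*h=*/3.0, /*nsides=*/12);

       // create the body, set density and frame damping, add the meshes
       RigidBody body = new RigidBody ("dumbbell");
       body.setDensity (10);
       body.setFrameDamping (10);
       RigidMeshComp bcomp1 = body.addMesh (ball1);
       RigidMeshComp bcomp2 = body.addMesh (ball2);
       body.addMesh (axis);
       mech.addRigidBody (body);

       // fixed particle and a marker on the body, connected by a spring
       Particle p0 = new Particle (/*mass=*/0.1, 0, 0, 4);
       p0.setDynamic (false);
       mech.addParticle (p0);
       FrameMarker mkr = mech.addFrameMarkerWorld (
          body, new Point3d (0, 0, 1.5));
       AxialSpring spring = new AxialSpring (
          "spr", /*stiffness=*/150, /*damping=*/10, /*restLength=*/0);
       spring.setPoints (p0, mkr);
       mech.addAxialSpring (spring);

       // make ball1 less dense, and set face colors
       bcomp1.setDensity (8);
       RenderProps.setFaceColor (body, Color.GRAY);
       RenderProps.setFaceColor (bcomp1, Color.RED);
       RenderProps.setFaceColor (bcomp2, Color.GREEN);
    }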
As in the previous examples, the build() method starts by creating a MechModel (lines 6-7). Three different meshes (two balls and an axis) are then constructed at lines 10-15, using MeshFactory methods (Section 2.5) and transforming each result to an appropriate position/orientation with respect to the body’s coordinate frame.
The body itself is constructed at lines 18-24. Its default density is set to 10, and its frame damping (Section 3.2.7) is also set to 10 (the previous rigid body example in Section 3.2.2 relied on spring damping to dissipate energy). The meshes are added using addMesh(), which allocates and returns a RigidMeshComp. For the ball meshes, these are saved in bcomp1 and bcomp2 and used later to adjust density and/or render properties.
Lines 27-34 create a simple linear spring, connected to a fixed point p0 and a marker mkr. The marker is created and attached to the body by the MechModel method addFrameMarkerWorld(), which places the marker at a known position in world coordinates. The spring is created using an AxialSpring constructor that accepts a name, along with stiffness, damping, and rest length parameters to specify a LinearAxialMaterial.
At line 37, bcomp1 is used to set the density of ball1 to 8. Since this is less than the default body density, the inertia component of ball1 will be lighter than that of ball2. Finally, render properties are set at lines 41-45. This includes setting the default face colors for the body and for each ball.
To run this example in ArtiSynth, select All demos > tutorial > RigidCompositeBody from the Models menu. The model should load and initially appear as in Figure 3.7. Running the model (Section 1.5.3) will cause the rigid body to fall and swing about under gravity, with the right ball (ball1) not falling as far because it has less density.
In a typical mechanical model, many of the rigid bodies are interconnected, either using spring-type components that exert binding forces on the bodies, or through joints and connectors that enforce the connection using hard constraints. This section describes the latter. While the discussion focuses on rigid bodies, joints and connectors can be used more generally with any body that implements the ConnectableBody interface. In particular, this allows joints to also interconnect finite element models, as described in Section 6.6.2.
Consider two rigid bodies A and B. The pose of body B with respect to body A can be described by the 6 DOF rigid transform $T_{BA}$. If A and B are unconnected, $T_{BA}$ may assume any possible value and has a full six degrees of freedom. A joint between A and B constrains the set of poses that are possible between the two bodies and reduces the degrees of freedom available to $T_{BA}$. For ease of use, the constraining action of a joint is described with respect to a pair of local coordinate frames C and D that are connected to frames A and B, respectively, by auxiliary transformations. This allows joints to be placed at locations that do not correspond directly to frames A or B.
The joint frames C and D move with respect to each other as the joint moves. The allowed joint motions therefore correspond to the allowed values of the joint transform $T_{CD}$. Although both frames typically move with their attached bodies, D is considered the base frame and C the motion frame (this is because when a joint is used to connect a single body to ground, body B is set to null and the world frame takes its place). As an example of a joint’s constraining effect, consider a hinge joint (Figure 3.8), which allows C to move with respect to D only by rotating about the $z$ axis while the origins of C and D remain coincident. Other motions are prohibited. If we let $\theta$ describe the counter-clockwise rotation angle of C about the $z$ axis, then $T_{CD}$ should always have the form

$$T_{CD} = \begin{pmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. \qquad (3.7)$$
When a joint is attached to bodies A and B, frame C is fixed to body A and frame D is fixed to body B. Except in special cases, the joint frames C and D are not coincident with the body frames A and B. Instead, they are located relative to A and B by the transforms $T_{CA}$ and $T_{DB}$, respectively (Figure 3.9). Since $T_{CA}$ and $T_{DB}$ are both fixed, the joint constraints on $T_{CD}$ constrain the relative poses of A and B, with $T_{CD}$ determined from

$$T_{CD} = T_{DB}^{-1} \, T_{BW}^{-1} \, T_{AW} \, T_{CA}. \qquad (3.8)$$
(See Section A.2 for a discussion of determining transforms between related coordinate frames).
Each different joint and connector type restricts the motion between two bodies to $n$ degrees of freedom, for some $n < 6$. Sometimes, the joint also defines a set of $n$ coordinates that parameterize these $n$ DOFs. For example, the hinge joint described above is parameterized by $\theta$. Other examples are given in Section 3.4: a 2 DOF cylindrical joint has coordinates $z$ and $\theta$, a 3 DOF gimbal joint is parameterized by its roll-pitch-yaw angles, etc. When $T_{CD} = I$ (where $I$ is the identity transform), the coordinates are usually all equal to zero, and the joint is said to be in the zero state.
As explained in Section 1.2, ArtiSynth uses a full coordinate formulation for dynamic simulation. That means that instead of using joint coordinates to describe system state, it uses the combined full coordinates of all dynamic components. For example, a model consisting of a single rigid body connected to ground by a hinge joint will have 6 DOF (corresponding to the 6 DOF of the body), rather than the 1 DOF implied by the hinge joint. The DOF restrictions imposed by the joints are then enforced by a set of linearized constraint relationships

$$\mathbf{G}(\mathbf{q}) \, \mathbf{u} = \mathbf{g}, \qquad \mathbf{N}(\mathbf{q}) \, \mathbf{u} \geq \mathbf{n}, \qquad (3.9)$$

that restrict the body velocities $\mathbf{u}$ computed at each simulation step, usually by solving an MLCP like (1.6). As explained in Section 1.2, the right side vectors $\mathbf{g}$ and $\mathbf{n}$ in (3.9) contain time derivative terms, which for simplicity much of the following presentation will assume to be 0.
Each joint contributes its own set of constraint equations to (3.9). Typically these take the form of bilateral, or equality, constraints

$$\mathbf{G}_J \, \mathbf{u} = \mathbf{0}, \qquad (3.10)$$

which are added to the system’s global bilateral constraint matrix $\mathbf{G}$. $\mathbf{G}_J$ contains $m$ rows providing $m$ individual constraints $\mathbf{G}_{J,i} \, \mathbf{u} = 0$. During simulation, these give rise to $m$ constraint forces (corresponding to $\boldsymbol{\lambda}$ in (1.8)) which enforce the constraints.
In some cases, the joint also maintains unilateral, or inequality constraints, to keep $T_{CD}$ out of inadmissible regions. These take the form

$$\mathbf{N}_J \, \mathbf{u} \geq \mathbf{0}, \qquad (3.11)$$

and are added to the system’s global unilateral constraint matrix $\mathbf{N}$. They give rise to constraint forces corresponding to $\boldsymbol{\theta}$ in (1.8). A common use of unilateral constraints is to enforce range limits of the joint coordinates (Section 3.3.5), such as

$$\theta_{\min} \leq \theta \leq \theta_{\max}. \qquad (3.12)$$
A specific unilateral constraint is added to $\mathbf{N}_J$ only when $T_{CD}$ is on or within the boundary of the inadmissible region associated with that constraint. The constraint is then said to be engaged. The combined number of bilateral and engaged unilateral constraints for a particular joint should not exceed 6; otherwise, the joint would be overconstrained.
Joint coordinates, when supported for a particular joint, can be both read and set. Setting a coordinate causes the joint transform $T_{CD}$ to change. To accommodate this, the system adjusts the poses of one or both bodies connected to the joint, along with adjacent bodies connected to them, with preference given to bodies that are not attached to “ground”. However, if this is done during simulation, and particularly if one or both of the bodies connected to the joint are moving dynamically, the results will be unpredictable and will likely conflict with the simulation.
Joint coordinates are also often exported as properties. For example, the HingeJoint class (Section 3.4) exports its coordinate $\theta$ as the property theta, which can be accessed in the GUI, or via the accessor methods getTheta() and setTheta().
Since joint constraints are generally nonlinear, their linearized
enforcement at the velocity level by (3.9) will
usually produce small errors as the simulation proceeds. These errors
are reduced using a position correction step described in
Section 4.9.1 and [11].
Errors can also be caused by joint compliance (Section 3.3.8). Both effects mean that the joint transform $T_{CD}$ may deviate from the allowed values dictated by the joint type. In ArtiSynth, this is accounted for by introducing an additional constraint frame G between D and C (Figure 3.10). G is computed to be the nearest frame to C that lies exactly in the joint constraint space. $T_{GD}$ is therefore a valid joint transform, $T_{CG}$ accommodates the error, and the whole joint transform is given by the composition

$$T_{CD} = T_{GD} \, T_{CG}. \qquad (3.13)$$

If there is no compliance or joint error, then frames G and C are identical, $T_{CG} = I$, and $T_{CD} = T_{GD}$. Because $T_{CG}$ describes the joint error, we sometimes refer to it as $T_{err}$.
Joint and connector components in ArtiSynth are both derived from the superclass BodyConnector, with joints being further derived from JointBase, which provides support for coordinates. Some of the commonly used joints and connectors are described in Section 3.4.
An application creates a joint by constructing it and adding it to a MechModel. Many joints have constructors of the form
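    // a sketch of the typical constructor form; XXXJoint stands for a
    // specific joint class such as HingeJoint
    XXXJoint (ConnectableBody bodyA, ConnectableBody bodyB,
              RigidTransform3d TDW);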
which specifies the bodies A and B which the joint connects, along with the transform $T_{DW}$ giving the pose of the joint base frame D in world coordinates. The constructor then assumes that the joint is in the zero state, so that C and D are the same and $T_{CW} = T_{DW}$ and $T_{CD} = I$, and then computes $T_{CA}$ and $T_{DB}$ from

$$T_{CA} = T_{AW}^{-1} \, T_{CW}, \qquad (3.14)$$

$$T_{DB} = T_{BW}^{-1} \, T_{DW}, \qquad (3.15)$$

where $T_{AW}$ and $T_{BW}$ are the current poses of A and B.
After the joint is created, it should be added to the system’s MechModel using addBodyConnector(), as shown in the following code fragment:
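    // a sketch, assuming two existing rigid bodies and a hinge joint
    HingeJoint joint = new HingeJoint (bodyA, bodyB, TDW);
    mech.addBodyConnector (joint);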
It is also possible to create a joint using its default constructor and attach it to the bodies afterward, using the method setBodies(bodyA,bodyB,TDW), as in the following:
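    // a sketch using the default constructor and setBodies()
    HingeJoint joint = new HingeJoint();
    // ... coordinate values could be set here, before attachment ...
    joint.setBodies (bodyA, bodyB, TDW);
    mech.addBodyConnector (joint);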
One reason for doing this is that it allows the joint transform $T_{CD}$ to be modified (by setting coordinate values) before setBodies() is called; this is discussed further in Section 3.3.4.
Joints usually offer a number of other constructors that allow their world location and body relationships to be specified in different ways. These may include:
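    // sketches of two common alternate constructor forms
    XXXJoint (RigidBody bodyA, RigidTransform3d TCA,
              RigidBody bodyB, RigidTransform3d TDB);
    XXXJoint (ConnectableBody bodyA, ConnectableBody bodyB,
              RigidTransform3d TCW, RigidTransform3d TDW);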
The first, which is restricted to rigid bodies, allows the application to explicitly specify the transforms $T_{CA}$ and $T_{DB}$ connecting frames C and D to the body frames A and B, and is useful when $T_{CA}$ and $T_{DB}$ are explicitly known, or the initial value of $T_{CD}$ is not the identity. Likewise, the second constructor allows $T_{CW}$ and $T_{DW}$ to be explicitly specified, with $T_{CD} \neq I$ if $T_{CW} \neq T_{DW}$. For instance, suppose $T_{CD}$ and $T_{DW}$ are both known. Then we can use the relationship

$$T_{CW} = T_{DW} \, T_{CD} \qquad (3.16)$$
to create the joint as in the following code fragment:
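    // a sketch: compute TCW from known TDW and TCD, then construct
    RigidTransform3d TCW = new RigidTransform3d();
    TCW.mul (TDW, TCD);   // TCW = TDW * TCD
    XXXJoint joint = new XXXJoint (bodyA, bodyB, TCW, TDW);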
As an alternative to specifying $T_{DW}$ or its equivalents, some joint types provide constructors that let the application locate specific joint features. These may be easier to use in some cases. For instance, HingeJoint provides a constructor
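    // a sketch of the likely form; argument types are assumptions
    HingeJoint (ConnectableBody bodyA, ConnectableBody bodyB,
                Point3d origin, Vector3d axis);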
that specifies the origin of D and its axis (which is the rotation axis), with the remaining orientation of D aligned as closely as possible with the world.
SphericalJoint provides a
constructor
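    // a sketch of the likely form
    SphericalJoint (ConnectableBody bodyA, ConnectableBody bodyB,
                    Point3d origin);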
that specifies the origin of D and aligns its orientation with the world. Users should consult the source code or API documentation for specific joints to see what special constructors may be available.
Finally, it is possible to use joints to connect a single body to ground (by convention, this is the A body). Most joints provide a constructor of the form
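    // a sketch of the single-body (ground) constructor form
    XXXJoint (ConnectableBody bodyA, RigidTransform3d TDW);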
which allows this to be done explicitly. Alternatively, most joint constructors which supply body B will allow this to be specified as null, so that body A will be connected to ground by default.
As mentioned in Section 3.3.2, some joints support coordinates that parameterize the valid motions within the joint transform $T_{CD}$. All such joints are subclasses of JointBase, which provides some generic methods for querying and setting coordinate values (JointBase is in turn a subclass of BodyConnector).

The number of coordinates is returned by the method numCoordinates(); if this returns 0, then coordinates are not supported. Each coordinate has an index in the range $[0, n-1]$, where $n$ is the number of coordinates. Coordinate values can be queried or set using the following methods:
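    // sketches of the generic coordinate accessors; angular values
    // are in radians
    double getCoordinate (int idx);
    void setCoordinate (int idx, double value);
    void getCoordinates (VectorNd coords);
    void setCoordinates (VectorNd coords);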
Specific joint types usually also provide names for their joint coordinates, along with integer constants describing their indices and methods for accessing their values. For example, CylindricalJoint supports two coordinates, $z$ and $\theta$, along with the following:
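    // constants and accessors supplied by CylindricalJoint (a sketch)
    static final int Z_IDX = 0;       // index of the z coordinate
    static final int THETA_IDX = 1;   // index of the theta coordinate
    double getZ();
    void setZ (double z);
    double getTheta();                // degrees
    void setTheta (double theta);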
The coordinate values are also exported as the properties z and theta, allowing them to be set in the GUI. For convenience, particularly in GUI applications, the properties and methods for controlling specific angular coordinates generally use degrees instead of radians.
As discussed in Section 3.3.2, unlike in some multibody simulation systems (such as OpenSim), joint coordinates are not fundamental quantities that describe system state. As such, then, coordinates can usually only be set in specific circumstances that avoid simulation conflicts. In general, when joint coordinates are set, the system adjusts the poses of one or both bodies connected to this joint, along with adjacent bodies connected to them, with preference given to bodies that are not attached to “ground”. However, if this is done during simulation, and particularly if one or both of the bodies connected to the joint are moving dynamically, the results will be unpredictable and will likely conflict with the simulation.
If a joint has been created with its default constructor and not yet attached to any bodies, then setting joint values will simply set the joint transform $T_{CD}$. This can be useful in situations where one needs to initialize a joint’s $T_{CD}$ to a non-identity value corresponding to a particular set of joint coordinates:
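    // a sketch: set coordinates before the joint is attached;
    // numeric values are examples
    CylindricalJoint joint = new CylindricalJoint();
    joint.setZ (0.5);      // initial translation
    joint.setTheta (35);   // initial rotation, in degrees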
This can also be done in vector form:
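    // the same initialization in vector form; coordinates supplied to
    // setCoordinates() are assumed to use radians for angles
    VectorNd coords = new VectorNd (2);
    coords.set (0, 0.5);                  // z
    coords.set (1, Math.toRadians (35));  // theta
    joint.setCoordinates (coords);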
In either of these cases, setBodies() will not assume that $T_{CD} = I$, but will instead use the value determined by the initial coordinate values.
To determine the $T_{CD}$ corresponding to a particular set of coordinates, one may use the method coordinatesToTCD().
In some cases, within a model’s build() method, one may wish to set initial coordinates after a joint has been attached to its bodies, in order to move those bodies (along with the bodies attached to them) into an initial configuration without having to explicitly calculate the poses from the joint coordinates. As mentioned above, the system will make a decision about which attached bodies are most “free” and adjust their poses accordingly. This is done in the example of the next section.
It is possible to set limits on a joint coordinate’s range, and also to lock a coordinate in place at its current value.
When a joint coordinate hits either an upper or lower range limit, a unilateral constraint is invoked to prevent it from violating the limit, and remains engaged until the joint moves away from the limit. Each range constraint that is engaged reduces the number of joint DOFs by one.
By default, joint range limits are usually disabled (i.e., they are set to $(-\infty, \infty)$). They can be queried and set, for a given coordinate with index idx, using the methods:
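    // sketches of the range accessors (radians for angular coordinates)
    DoubleInterval getCoordinateRange (int idx);
    void setCoordinateRange (int idx, DoubleInterval rng);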
where range limits for angular coordinates are specified in radians. For convenience, the following methods are also provided which use degrees instead of radians for angular coordinates:
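    // degree-based variants (sketches)
    DoubleInterval getCoordinateRangeDeg (int idx);
    void setCoordinateRangeDeg (int idx, DoubleInterval rng);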
Range checking can be disabled by setting the range to $(-\infty, \infty)$, or by specifying rng as null, which implicitly does the same thing.
Ranges for angular coordinates are not limited to a single revolution but can instead be set to larger values; the joint will continue to wrap until the limit is reached.
Joint coordinates can also be locked, so that they hold their current value and don’t move. A joint is locked using a bilateral constraint that prevents motion in either direction and reduces the joint’s DOF count by one. The following methods are available for querying or setting a coordinate’s locked status:
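    // sketches of the locking accessors
    boolean isCoordinateLocked (int idx);
    void setCoordinateLocked (int idx, boolean locked);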
As with coordinate values, specific joint types usually provide methods for controlling the ranges and locking status of individual coordinates, with ranges for angular coordinates specified in degrees instead of radians. For example, CylindricalJoint supplies the methods
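    // sketches of the CylindricalJoint range and locking methods;
    // theta values are in degrees
    DoubleInterval getZRange();
    void setZRange (DoubleInterval rng);
    DoubleInterval getThetaRange();
    void setThetaRange (DoubleInterval rng);
    boolean isZLocked();
    void setZLocked (boolean locked);
    boolean isThetaLocked();
    void setThetaLocked (boolean locked);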
The range and locking information is also exported as the properties zRange, thetaRange, zLocked, and thetaLocked, allowing them to be set in the GUI.
A simple model showing two rigid bodies connected by a joint is defined in
artisynth.demos.tutorial.RigidBodyJoint
The build method for this model is given below:
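A condensed sketch is shown here; mesh dimensions, poses, and damping values are assumptions, and the line numbers cited below refer to the full demo source.

    public void build (String[] args) {
       // create a MechModel with explicit gravity and damping settings
       MechModel mech = new MechModel ("mech");
       mech.setGravity (0, 0, -98);
       mech.setFrameDamping (0.1);
       mech.setRotaryDamping (0.01);
       addModel (mech);

       // bodyB: rounded box mesh turned into a body with density 0.2
       PolygonalMesh mesh = MeshFactory.createRoundedBox (
          /*wx,wy,wz=*/2.0, 0.5, 0.5, /*nslices=*/16);
       RigidBody bodyB = RigidBody.createFromMesh (
          "bodyB", mesh, /*density=*/0.2, /*scale=*/1);
       bodyB.setPose (new RigidTransform3d (0, 0, 1.0));
       bodyB.setDynamic (false); // fix the base body (an assumption)
       mech.addRigidBody (bodyB);

       // bodyA: rounded cylinder mesh (factory arguments are assumptions)
       mesh = MeshFactory.createRoundedCylinder (
          /*r=*/0.25, /*h=*/1.5, /*nslices=*/16, /*nsegs=*/1,
          /*flatBottom=*/false);
       RigidBody bodyA = RigidBody.createFromMesh (
          "bodyA", mesh, /*density=*/0.2, /*scale=*/1);
       bodyA.setPose (new RigidTransform3d (0, 0, 2.5));
       mech.addRigidBody (bodyA);

       // hinge joint, with frame D located in world coordinates by TDW
       RigidTransform3d TDW = new RigidTransform3d (0, 0, 2.0);
       HingeJoint joint = new HingeJoint (bodyA, bodyB, TDW);
       mech.addBodyConnector (joint);
       joint.setTheta (35);  // set the joint angle, in degrees

       // render the joint as a cylindrical shaft about the rotation
       // axis (accessors assumed from the shaftLength/shaftRadius
       // properties)
       joint.setShaftLength (1.0);
       joint.setShaftRadius (0.05);
       RenderProps.setFaceColor (joint, Color.BLUE);
    }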
A MechModel is created as usual at line 4. However, in this example, we also set some parameters for it: setGravity() is used to set the gravity acceleration vector to $(0, 0, -98)^T$ instead of the default value of $(0, 0, -9.8)^T$, and the frameDamping and rotaryDamping properties (Section 3.2.7) are set to provide appropriate damping.
Each of the two rigid bodies are created from a mesh and a density. The meshes themselves are created using the factory methods MeshFactory.createRoundedBox() and MeshFactory.createRoundedCylinder() (lines 13 and 22), and then RigidBody.createFromMesh() is used to turn these into rigid bodies with a density of 0.2 (lines 17 and 25). The pose of the two bodies is set using RigidTransform3d objects created with x, y, z translation and axis-angle orientation values (lines 18 and 26).
The hinge joint is implemented using HingeJoint, which is constructed at line 32 with the joint coordinate frame D being located in world coordinates by TDW as described in Section 3.3.3.
Once the joint is created and added to the MechModel, the method setTheta() is used to explicitly set the joint parameter $\theta$ to 35 degrees. The joint transform $T_{CD}$ is then set appropriately and bodyA is moved to accommodate this (bodyA being chosen since it is the most free to move).
Finally, joint rendering properties are set starting at line 42. We render the joint as a cylindrical shaft about the rotation axis, using its shaftLength and shaftRadius properties. Joint rendering is discussed in more detail in Section 3.3.10.
During each simulation solve step, the joint velocity constraints described by (3.10) and (3.11) are enforced by bilateral and unilateral constraint forces $\hat{\mathbf{f}}_G$ and $\hat{\mathbf{f}}_N$:

$$\hat{\mathbf{f}}_G = \mathbf{G}_J^T \boldsymbol{\lambda}, \qquad \hat{\mathbf{f}}_N = \mathbf{N}_J^T \boldsymbol{\theta}. \qquad (3.17)$$

Here, $\hat{\mathbf{f}}_G$ and $\hat{\mathbf{f}}_N$ are spatial forces (or wrenches, Section A.5) acting in the joint coordinate frame C, and $\boldsymbol{\lambda}$ and $\boldsymbol{\theta}$ are the Lagrange multipliers computed as part of the mechanical system solve (see (1.6) and (1.8)). The sizes of $\boldsymbol{\lambda}$ and $\boldsymbol{\theta}$ equal the number of bilateral and engaged unilateral constraints in the joint; these numbers can be queried for a particular joint using the methods numBilateralConstraints() and numEngagedUnilateralConstraints(). (The number of engaged unilateral constraints may be less than the total number of unilateral constraints; the latter may be queried with numUnilateralConstraints(), while the total number of constraints is returned by numConstraints().)
Applications may sometimes need to query the current constraint force values, typically from within a controller or monitor (Section 5.3). The Lagrange multipliers themselves may be obtained with
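    // sketches of the multiplier query methods
    void getBilateralForces (VectorNd lam);
    void getUnilateralForces (VectorNd the);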
which load the multipliers into lam or the, and set their sizes to the number of bilateral or engaged unilateral constraints. Alternatively, the multiplier for an individual constraint can be retrieved by its index idx; see the BodyConnector API documentation for the relevant methods.
Typically, it is more useful to find the spatial constraint forces $\hat{\mathbf{f}}_G$ and $\hat{\mathbf{f}}_N$, which can be obtained with respect to frame C using methods supplied by BodyConnector; consult its API documentation for details.
If the attached bodies A and B are rigid bodies, it is also possible to obtain the constraint wrenches experienced by those bodies, again via methods supplied by BodyConnector.
Constraint wrenches obtained for bodies A or B are given in world coordinates, which is consistent with the forces reported by rigid bodies via their getForce() method. To orient the forces into body coordinates, one may use the inverse of the rotation matrix $\mathbf{R}$ of the body’s pose. For example:
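    // a sketch: rotate a wrench from world into body-A coordinates;
    // 'wrenchInWorld' is assumed to hold the wrench in world coordinates
    Wrench wr = new Wrench (wrenchInWorld);
    wr.inverseTransform (bodyA.getPose().R);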
By default, the constraints used to implement joints and couplings are treated as hard, so that the system tries to respect the constraint conditions (3.9) as exactly as possible as the simulation proceeds. Sometimes, however, it is desirable to introduce some “softness” into the constraints, whereby constraint forces are determined as a linear function of their distance from the constraint. Adding compliance also allows an application to regularize a system of joint constraints that would otherwise be overconstrained, as illustrated in Section 3.3.9.
To describe compliance precisely, consider the bilateral constraint portion of the MLCP in (1.6), which solves for the updated system velocities $\mathbf{u}^{k+1}$ at each time step:

$$\begin{pmatrix} \hat{\mathbf{M}} & -\mathbf{G}^T \\ \mathbf{G} & \mathbf{0} \end{pmatrix} \begin{pmatrix} \mathbf{u}^{k+1} \\ \tilde{\boldsymbol{\lambda}} \end{pmatrix} = \begin{pmatrix} \hat{\mathbf{M}} \mathbf{u}^{k} - h \hat{\mathbf{f}} \\ \mathbf{0} \end{pmatrix} \qquad (3.18)$$

Here $\mathbf{G}$ is the system’s bilateral constraint matrix, $\tilde{\boldsymbol{\lambda}}$ denotes the constraint impulses (from which the constraint forces can be determined by $\boldsymbol{\lambda} = \tilde{\boldsymbol{\lambda}}/h$), and for simplicity we have assumed that $\mathbf{G}$ is constant and so the term $\mathbf{g}$ on the lower right side is $\mathbf{0}$.
Solving (3.18) results in constraint forces that satisfy $\mathbf{G} \mathbf{u}^{k+1} = \mathbf{0}$ precisely, corresponding to hard constraints. To implement soft constraints, start by defining a function $\boldsymbol{\phi}(\mathbf{q})$ that defines the distances from each constraint, where $\mathbf{q}$ is the vector of system positions; these distances are the local translational and rotational deviations from each constraint’s correct position and are discussed in more detail in Section 4.9.1. Then assume that the constraint forces are a linear function of these distances:

$$\boldsymbol{\lambda} = -\mathbf{C}^{-1} \boldsymbol{\phi}, \qquad (3.19)$$
where $\mathbf{C}$ is a diagonal compliance matrix that is equivalent to an inverse stiffness matrix. We also note that $\boldsymbol{\phi}$ will be time varying, and that we can approximate its change between time steps as

$$\boldsymbol{\phi}^{k+1} \approx \boldsymbol{\phi}^{k} + h \, \mathbf{G} \, \mathbf{u}^{k+1}. \qquad (3.20)$$
Next, assume that in using (3.19) to determine $\boldsymbol{\lambda}$ for a particular time step, we use the average value of $\boldsymbol{\phi}$ over the step, represented by $\bar{\boldsymbol{\phi}} \approx \boldsymbol{\phi}^{k} + (h/2) \, \mathbf{G} \, \mathbf{u}^{k+1}$. Substituting this and (3.20) into (3.19) and rearranging yields:

$$\mathbf{G} \, \mathbf{u}^{k+1} + \frac{2}{h} \mathbf{C} \boldsymbol{\lambda} = -\frac{2}{h} \boldsymbol{\phi}^{k}. \qquad (3.21)$$
Then noting that $\boldsymbol{\lambda} = \tilde{\boldsymbol{\lambda}}/h$, we obtain a revised form of (3.18),

$$\begin{pmatrix} \hat{\mathbf{M}} & -\mathbf{G}^T \\ \mathbf{G} & \frac{2}{h^2}\mathbf{C} \end{pmatrix} \begin{pmatrix} \mathbf{u}^{k+1} \\ \tilde{\boldsymbol{\lambda}} \end{pmatrix} = \begin{pmatrix} \hat{\mathbf{M}} \mathbf{u}^{k} - h \hat{\mathbf{f}} \\ -\frac{2}{h}\boldsymbol{\phi}^{k} \end{pmatrix} \qquad (3.22)$$

in which the zeros in the matrix and right hand side have been replaced by compliance terms. The resulting constraint behavior is different from that of (3.18) in two important ways:
The joint now allows 6 DOF, with motion along the constrained directions limited by restoring spring constants given by the reciprocals of the diagonal entries of $\mathbf{C}$.
Unilateral constraints can be regularized using the same approach, with a distance function $\boldsymbol{\phi}(\mathbf{q})$ defined such that $\boldsymbol{\phi} \geq \mathbf{0}$ within the admissible region.
The reason for specifying soft constraints using compliance instead of stiffness is that by setting $\mathbf{C} = \mathbf{0}$ we can easily handle the case of infinite stiffness where the constraints are strictly enforced.
The ArtiSynth compliance implementation uses a slightly more complex version of (3.22) that accounts for non-constant $\mathbf{G}$ and also allows for a damping term $-\mathbf{D} \dot{\boldsymbol{\phi}}$, where $\mathbf{D}$ is again a diagonal matrix. For more details, see [9] and [21].
When using compliance, damping is often needed for stability, and, in the case of unilateral constraints, to prevent “bouncing”. A good choice for damping is usually critical damping, which is discussed further below.
Any joint which is a subclass of BodyConnector allows individual compliance values $c_i$ and damping values $d_i$ to be set for each of the joint’s constraints. These values comprise the diagonal entries in the compliance and damping matrices $\mathbf{C}$ and $\mathbf{D}$, and can be queried and set using the methods
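    // sketches of the compliance/damping accessors
    VectorNd getCompliance();
    void setCompliance (VectorNd compliance);
    VectorNd getDamping();
    void setDamping (VectorNd damping);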
The vectors supplied to the above set methods contain the requested compliance or damping values. If their size $n$ is less than numConstraints(), then compliance or damping will be set for the first $n$ constraints. Damping for a specific constraint only has an effect if the compliance for that constraint is nonzero.
What compliance and damping values should be specified? Compliance is usually relatively easy to figure out. Each of the joint’s individual constraints corresponds to a row in its bilateral constraint matrix $\mathbf{G}_J$ or unilateral constraint matrix $\mathbf{N}_J$, and represents a specific 6 DOF direction along which the spatial velocity $\hat{\mathbf{v}}$ (of frame C with respect to D) is restricted (more details on this are given in Section 4.9.1). Each of these constraint directions is usually predominantly linear or rotational; specific descriptions for the constraints of different joints are provided in Section 3.4. To determine compliance for a constraint $i$, estimate the typical force $f$ likely to act along its direction, decide how much displacement $\delta$ (translational or rotational) along that constraint is desirable, and then set the compliance $c_i$ to the associated inverse stiffness:

$$c_i = \frac{\delta}{f}. \qquad (3.23)$$
Once $c_i$ is determined, the damping $d_i$ can be estimated based on the desired damping ratio $\zeta$, using the formula

$$d_i = 2 \zeta \sqrt{\frac{m}{c_i}}, \qquad (3.24)$$

where $m$ is the total mass of the bodies attached to the joint. Typically, the desired damping will be close to critical damping, for which $\zeta = 1$.
Constraints associated with linear motion will typically require different compliance values from those associated with rotation. To make this process easier, joint components allow the setting of collective compliance values for their linear and rotary constraints, using the methods
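    // sketches of the collective compliance accessors
    void setLinearCompliance (double c);
    double getLinearCompliance();
    void setRotaryCompliance (double c);
    double getRotaryCompliance();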
The set() methods will set a uniform compliance for all linear or rotary constraints, except for unilateral constraints associated with coordinate limits. At the same time, they will also set an automatically computed critical damping value. Likewise, the get() methods query these linear or rotary constraints for uniform compliance values (with the corresponding critical damping), and return either that value, or -1 if it does not exist.
Most of the demonstration models for the joints described in Section 3.4 allow these linear and rotary compliance settings to be adjusted interactively using a control panel, enabling users to experimentally gain a feel for their behavior.
To determine programmatically whether a particular constraint is linear or rotary, one can use the joint method
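    // a sketch; the method name is an assumption based on the
    // surrounding description
    VectorNi getConstraintFlags();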
which returns a vector of information flags for all its constraints. Linear and rotary constraints are indicated by the flags LINEAR and ROTARY, defined in RigidBodyConstraint.
Situations may occasionally arise in which a model is overconstrained, which means that the rows of the bilateral constraint matrix $\mathbf{G}$ in (3.9) are not all linearly independent, or in other words, $\mathbf{G}$ does not have full row rank. At present, the ArtiSynth solver has difficulty handling overconstrained models, but these situations can often be handled by adding a small amount of compliance to the constraints. (Overconstraining is not a problem with unilateral constraints $\mathbf{N}$, because of the way they are handled by the solver.)
One possible symptom of an overconstrained system is an error message in the application’s terminal output, such as
Pardiso: num perturbed pivots=12
Overconstraining frequently occurs in closed-chain linkages, involving loops in which a jointed sequence of links is connected back on itself. Depending on how the constraints are configured and how redundant they are, the system may still be able to move. A classical example is the four-bar linkage, a common version of which consists of four links, or “bars”, arranged as a parallelogram and connected by hinge joints at the corners. One link is usually connected to ground, and so the remaining three links together have 18 DOF, while the four hinge joints together remove 20 DOF, overconstraining the system. However, the constraints are redundant in such a way that the linkage still actually has 1 DOF.
To model a four-bar in ArtiSynth presently requires adding compliance to the hinge joints. An example of this is defined by the demo program
artisynth.demos.tutorial.FourBarLinkage
shown in Figure 3.12. The code for the build() method and a couple of supporting methods is given below:
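A condensed sketch of the helper methods and build() follows; dimensions and numerical values are assumptions, and the line numbers cited below refer to the full demo source.

    // create a link centered at (x,0,z), rotated about the y axis by 'deg'
    protected RigidBody createLink (
       MechModel mech, String name, double x, double z, double deg) {
       PolygonalMesh mesh = MeshFactory.createRoundedBox (
          /*wx,wy,wz=*/0.2, 0.2, 1.0, /*nslices=*/12);
       RigidBody link = RigidBody.createFromMesh (
          name, mesh, /*density=*/1000.0, /*scale=*/1);
       RigidTransform3d TBW = new RigidTransform3d (x, 0, z);
       TBW.R.setAxisAngle (0, 1, 0, Math.toRadians (deg));
       link.setPose (TBW);
       mech.addRigidBody (link);
       return link;
    }

    // connect the top of link0 to the bottom of link1 with a hinge
    protected HingeJoint createJoint (
       MechModel mech, RigidBody link0, RigidBody link1) {
       RigidTransform3d TCA = new RigidTransform3d (0, 0, 0.5);
       TCA.R.setAxisAngle (1, 0, 0, Math.PI/2); // joint z along body y
       RigidTransform3d TDB = new RigidTransform3d (0, 0, -0.5);
       TDB.R.setAxisAngle (1, 0, 0, Math.PI/2);
       HingeJoint joint = new HingeJoint (link0, TCA, link1, TDB);
       mech.addBodyConnector (joint);
       // draw the joint axis as a blue cylinder
       joint.setShaftLength (0.4);
       RenderProps.setFaceColor (joint, Color.BLUE);
       return joint;
    }

    public void build (String[] args) {
       MechModel mech = new MechModel ("mech");
       addModel (mech);
       mech.setFrameDamping (1.0);
       mech.setRotaryDamping (4.0);

       // create the four links and ground the left one
       RigidBody[] links = new RigidBody[4];
       links[0] = createLink (mech, "link0", -0.5,  0.0,   0);
       links[1] = createLink (mech, "link1",  0.0,  0.5,  90);
       links[2] = createLink (mech, "link2",  0.5,  0.0, 180);
       links[3] = createLink (mech, "link3",  0.0, -0.5, 270);
       links[0].setDynamic (false);

       // connect the links with hinge joints
       HingeJoint[] joints = new HingeJoint[4];
       for (int i = 0; i < 4; i++) {
          joints[i] = createJoint (mech, links[i], links[(i+1)%4]);
       }

       // set uniform compliance and damping for the bilateral
       // constraints (indices 0-4) of each hinge joint
       VectorNd comp = new VectorNd (5);
       VectorNd damp = new VectorNd (5);
       for (int i = 0; i < 5; i++) {
          comp.set (i, 1e-6); // compliance value is an assumption
          damp.set (i, 2e4);  // roughly critical for the assumed masses
       }
       for (int i = 0; i < 4; i++) {
          joints[i].setCompliance (comp);
          joints[i].setDamping (damp);
       }
    }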
Two helper methods are used to construct the model: createLink() (lines 6-17), and createJoint() (lines 23-36). createLink() makes the individual rigid bodies used to build the linkage: a mesh is produced defining the body’s shape (a box with rounded ends), and then passed to the RigidBody createFromMesh() method, which creates the body and sets its inertia according to a specified density. The body’s pose is then set so as to center it at a prescribed location while rotating it about the $y$ axis by the angle deg (in degrees). The completed body is then added to the MechModel mech and returned.
The second helper method, createJoint(), connects two rigid bodies (link0 and link1) together using a HingeJoint. Because we know the location of the joint in body-relative coordinates, it is easier to create the joint using the transforms $T_{CA}$ and $T_{DB}$ instead of $T_{DW}$: $T_{CA}$ locates the joint at the top end of link0, with the joint’s $z$ axis parallel to the body’s $y$ axis, while $T_{DB}$ similarly locates the joint at the bottom of link1. After the joint is created and added to the MechModel, its render properties are set so that its axis is drawn as a blue cylinder.
The build() method itself begins by creating a MechModel
and setting damping parameters for the rigid bodies (lines
40-43). Next, createLink() is used to create and store the four
links (lines 46-50), and the left bar is attached to ground by making
it non-dynamic (line 52). The links are then connected together using
joints created by createJoint() (lines 55-59). Finally, uniform
compliance and damping values are set for each joint’s bilateral constraints, using the setCompliance() and setDamping() methods (lines 63-72). Values are set for the first five constraints, since for a HingeJoint these are the bilateral constraints. The compliance value $c$ was found experimentally to be low enough so as to not cause noticeable deflections in the joints. Given $c$ and the average mass of each link pair, (3.24) then suggests a damping factor of $d = 2\sqrt{m/c}$, corresponding to critical damping. Note that for this example, very similar settings could be achieved by simply calling
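    // a sketch; the compliance value is an assumption
    for (int i = 0; i < joints.length; i++) {
       joints[i].setLinearCompliance (1e-6);
       joints[i].setRotaryCompliance (1e-6);
    }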
In principle, we only need to set compliance for the constraints that are redundant, but it can sometimes be difficult to determine exactly which these are. Also, different values are often needed for linear and rotary constraints; that is not necessary here because the links have unit length and so the linear and rotary units have similar scales.
Most joints provide a means to render themselves in order to provide a graphical representation of their position and configuration. Control over this is achieved by setting various properties in the joint component, including both specialized properties and the standard render properties (Section 4.3) used by all renderable components.
All joints which are subclasses of JointBase support rendering of both their C and D coordinate frames, through the properties drawFrameC, drawFrameD, and axisLength. The first two properties are of the type Renderer.AxisDrawStyle (described in detail in Section 3.2.8), and can be set to LINE or ARROW to enable the coordinate axes to be drawn either as lines or solid arrows. The axisLength property has type double and specifies the length with which the axes are drawn. As with all properties, these properties can be set either in the GUI, or in code using accessor methods supplied by the joint:
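    // sketches of the accessors
    void setDrawFrameC (Renderer.AxisDrawStyle style);
    Renderer.AxisDrawStyle getDrawFrameC();
    void setDrawFrameD (Renderer.AxisDrawStyle style);
    Renderer.AxisDrawStyle getDrawFrameD();
    void setAxisLength (double len);
    double getAxisLength();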
Another pair of properties used by several joints is shaftLength and shaftRadius, which specify the length and radius used to draw shaft or axis structures associated with the joint. These are rendered as solid cylinders, using the color indicated by the faceColor rendering property. The default value of both properties is 0; if shaftLength is 0, then the structures are not drawn, while if shaftRadius is 0, a default value proportional to shaftLength is used. For example, to enable rendering of a blue shaft along the rotation axis of a hinge joint, one may use the code fragment
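    // 'joint' is assumed to be an existing HingeJoint, and the
    // shaft dimensions are example values
    joint.setShaftLength (0.5);
    joint.setShaftRadius (0.05);
    RenderProps.setFaceColor (joint, Color.BLUE);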
As another example, to enable rendering of a green ball about the center of a spherical joint, one may use the fragment
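    // 'joint' is assumed to be an existing SphericalJoint; the property
    // controlling the ball size is assumed to be jointRadius
    joint.setJointRadius (0.1);
    RenderProps.setFaceColor (joint, Color.GREEN);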
Specific joints may define additional properties to control how they are rendered.
ArtiSynth supplies a number of basic joints and connectors in the package artisynth.core.mechmodels, the most common of which are described here.
Many of the descriptions are associated with a demonstration model, named XXXJointDemo, where XXX is the joint type. These demos are located in the package artisynth.demos.mech, and can be loaded by selecting All demos > mech > XXXJointDemo from the Models menu. When run, they can be interactively controlled, using either the pull tool (see the section “Pull Manipulation” in the ArtiSynth User Interface Guide), or the interactive control panel. The control panel allows the adjustment of coordinate values and ranges (if supported), some of the render properties, and the different compliance and damping properties (Section 3.3.8). One can inspect the source code for each demo in its .java file located in the folder <ARTISYNTH_HOME>/src/artisynth/demos/mech.
[Figure 3.13: the hinge joint, showing frames C and D and the coordinate $\theta$.]
The HingeJoint (Figure 3.13) is a 1 DOF joint that constrains motion between frames C and D to a simple rotation about the $z$ axis of D. It implements six constraints and one coordinate $\theta$ (Table 3.1), to which the joint transform $T_{CD}$ is related by

$$T_{CD} = \begin{pmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
The value and ranges for $\theta$ are exported by the properties theta and thetaRange, and the coordinate index is defined by the constant THETA_IDX. For rendering, the properties shaftLength and shaftRadius control the size of a shaft drawn about the rotation axis, using the faceColor rendering property. A demo is provided by artisynth.demos.mech.HingeJointDemo.
In addition to the standard constructors described in Section 3.3.3,
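    // a sketch of the likely form; argument types are assumptions
    HingeJoint (ConnectableBody bodyA, ConnectableBody bodyB,
                Point3d origin, Vector3d axis);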
creates a hinge joint with a specified origin and axis direction
for frame D (in world coordinates), and frames C and D coincident.
| Index | type/name | description |
|---|---|---|
| 0 | bilateral | restricts translation along the $x$ axis of D |
| 1 | bilateral | restricts translation along the $y$ axis of D |
| 2 | bilateral | restricts translation along the $z$ axis of D |
| 3 | bilateral | restricts rotation about the $x$ axis of D |
| 4 | bilateral | restricts rotation about the $y$ axis of D |
| 5 | unilateral | enforces limits on the coordinate $\theta$ |
| 0 | $\theta$ | counter-clockwise rotation of C about the $z$ axis of D |

Table 3.1: Constraints (upper rows) and coordinate (lower row) for the hinge joint.
[Figure 3.14: SliderJoint.]
The SliderJoint (Figure 3.14) is a 1 DOF joint that constrains motion between frames C and D to a simple translation along the $z$ axis of D. It implements six constraints and one coordinate $z$ (Table 3.2), to which the joint transform $\mathbf{T}_{CD}$ is related by

$$\mathbf{T}_{CD} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & z \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
The value and ranges for $z$ are exported by the properties z and zRange, and the coordinate index is defined by the constant Z_IDX. For rendering, the properties shaftLength and shaftRadius control the size of a shaft drawn about the sliding axis, using the faceColor rendering property. A demo is provided by artisynth.demos.mech.SliderJointDemo.
In addition to the standard constructors described in Section 3.3.3, a convenience constructor creates a slider joint with a specified origin and axis direction for frame D (in world coordinates), and with frames C and D coincident.
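A similar hedged sketch, under the same assumptions as the hinge example above:

```java
SliderJoint joint = new SliderJoint (
   bodyA, bodyB, new Point3d (0, 0, 0), new Vector3d (0, 0, 1));
joint.setZ (0.1);             // displace C by 0.1 along the slider axis
mech.addBodyConnector (joint);
```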
Table 3.2: Constraints (indices 0–5) and coordinate (final row) for the SliderJoint.

Index | type/name | description
---|---|---
0 | bilateral | restricts translation along $x$
1 | bilateral | restricts translation along $y$
2 | bilateral | restricts rotation about $x$
3 | bilateral | restricts rotation about $y$
4 | bilateral | restricts rotation about $z$
5 | unilateral | enforces limits on the $z$ coordinate
0 | $z$ | translation of C along the $z$ axis of D
[Figure 3.15: CylindricalJoint.]
The CylindricalJoint (Figure 3.15) is a 2 DOF joint that constrains motion between frames C and D to translation and rotation along and about the $z$ axis of D. It implements six constraints and two coordinates $z$ and $\theta$ (Table 3.3), to which the joint transform $\mathbf{T}_{CD}$ is related by

$$\mathbf{T}_{CD} = \begin{pmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & z \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
The value and ranges for $z$ and $\theta$ are exported by the properties z, theta, zRange and thetaRange, and the coordinate indices are defined by the constants Z_IDX and THETA_IDX. For rendering, the properties shaftLength and shaftRadius control the size of a shaft drawn about the sliding/rotation axis, using the faceColor rendering property. A demo is provided by artisynth.demos.mech.CylindricalJointDemo.
In addition to the standard constructors described in Section 3.3.3, a convenience constructor creates a cylindrical joint with a specified origin and axis direction for frame D (in world coordinates), and with frames C and D coincident.
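Again as a hedged sketch under the same assumptions:

```java
CylindricalJoint joint = new CylindricalJoint (
   bodyA, bodyB, new Point3d (0, 0, 0), new Vector3d (0, 0, 1));
joint.setZ (0.05);            // translation along the z axis of D
joint.setTheta (45);          // rotation (degrees) about the z axis of D
mech.addBodyConnector (joint);
```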
Table 3.3: Constraints (indices 0–5) and coordinates (final rows) for the CylindricalJoint.

Index | type/name | description
---|---|---
0 | bilateral | restricts translation along $x$
1 | bilateral | restricts translation along $y$
2 | bilateral | restricts rotation about $x$
3 | bilateral | restricts rotation about $y$
4 | unilateral | enforces limits on the $z$ coordinate
5 | unilateral | enforces limits on the $\theta$ coordinate
0 | $z$ | translation of C along the $z$ axis of D
1 | $\theta$ | counter-clockwise rotation of C about the $z$ axis of D
[Figure 3.16: SlottedHingeJoint.]
The SlottedHingeJoint (Figure 3.16) is a 2 DOF joint that constrains motion between frames C and D to translation along the $x$ axis and rotation about the $z$ axis of D. It implements six constraints and two coordinates $x$ and $\theta$ (Table 3.4), to which the joint transform $\mathbf{T}_{CD}$ is related by

$$\mathbf{T}_{CD} = \begin{pmatrix} \cos\theta & -\sin\theta & 0 & x \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (3.25)$$
The value and ranges for $x$ and $\theta$ are exported by the properties x, theta, xRange and thetaRange, and the coordinate indices are defined by the constants X_IDX and THETA_IDX. For rendering, the properties shaftLength and shaftRadius control the size of a shaft drawn about the rotation axis, while slotWidth and slotDepth control the width and depth of a slot drawn along the sliding ($x$) axis; both are drawn using the faceColor rendering property. When rendering the slot, its bounds along the $x$ axis are set to xRange by default. However, this may be too large, particularly if xRange is unbounded. As an alternative, the property slotRange will be used instead if its range (i.e., the upper bound minus the lower bound) exceeds 0. A demo of SlottedHingeJoint is provided by artisynth.demos.mech.SlottedHingeJointDemo.
In addition to the standard constructors described in Section 3.3.3, a convenience constructor creates a slotted hinge joint with a specified origin and axis direction for frame D (in world coordinates), and with frames C and D coincident.
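A hedged sketch combining construction with the slot rendering properties described above (the setSlotRange accessor, taking a DoubleInterval from maspack.util, is an assumption based on the slotRange property):

```java
SlottedHingeJoint joint = new SlottedHingeJoint (
   bodyA, bodyB, new Point3d (0, 0, 0), new Vector3d (0, 0, 1));
joint.setShaftLength (0.4);   // shaft about the rotation axis
joint.setShaftRadius (0.02);
joint.setSlotWidth (0.05);    // width of the slot along the x axis
joint.setSlotDepth (0.15);    // depth of the slot
joint.setSlotRange (new DoubleInterval (-0.5, 0.5)); // explicit render bounds
RenderProps.setFaceColor (joint, Color.CYAN);
mech.addBodyConnector (joint);
```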
Table 3.4: Constraints (indices 0–5) and coordinates (final rows) for the SlottedHingeJoint.

Index | type/name | description
---|---|---
0 | bilateral | restricts translation along $y$
1 | bilateral | restricts translation along $z$
2 | bilateral | restricts rotation about $x$
3 | bilateral | restricts rotation about $y$
4 | unilateral | enforces limits on the $x$ coordinate
5 | unilateral | enforces limits on the $\theta$ coordinate
0 | $x$ | translation of C along the $x$ axis of D
1 | $\theta$ | counter-clockwise rotation of C about the $z$ axis of D
[Figure 3.17: UniversalJoint.]
Table 3.5: Constraints (indices 0–5) and coordinates (final rows) for the UniversalJoint.

Index | type/name | description
---|---|---
0 | bilateral | restricts translation along $x$
1 | bilateral | restricts translation along $y$
2 | bilateral | restricts translation along $z$
3 | bilateral | restricts rotation about the final $x$ axis
4 | unilateral | enforces limits on the roll coordinate
5 | unilateral | enforces limits on the pitch coordinate
0 | roll | first rotation of C about the $z$ axis of D
1 | pitch | second rotation of C about the rotated $y'$ axis
The UniversalJoint (Figure 3.17) is a 2 DOF joint that allows C two rotational degrees of freedom with respect to D: a roll rotation about D's $z$ axis, followed by a pitch rotation about the rotated $y'$ axis. It implements six constraints and the two coordinates roll and pitch (Table 3.5), to which the joint transform $\mathbf{T}_{CD}$ is related by

$$\mathbf{T}_{CD} = \begin{pmatrix} \mathbf{R}_{CD} & \mathbf{0} \\ \mathbf{0}^T & 1 \end{pmatrix},$$

where

$$\mathbf{R}_{CD} = \begin{pmatrix} c_r c_p & -s_r & c_r s_p \\ s_r c_p & c_r & s_r s_p \\ -s_p & 0 & c_p \end{pmatrix},$$

with $c_r \equiv \cos(\text{roll})$, $s_r \equiv \sin(\text{roll})$, $c_p \equiv \cos(\text{pitch})$, and $s_p \equiv \sin(\text{pitch})$.
The value and ranges for roll and pitch are exported by the properties roll, pitch, rollRange and pitchRange, and the coordinate indices are defined by the constants ROLL_IDX and PITCH_IDX. For rendering, the properties shaftLength and shaftRadius control the size of shafts drawn about the roll and pitch axes, while jointRadius specifies the radius of a ball drawn around the origin of D; both are drawn using the faceColor rendering property. A demo is provided by artisynth.demos.mech.UniversalJointDemo.
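A hedged sketch of these properties in use (the constructor shown, taking a world-frame transform TDW for D, is assumed to be one of the standard forms of Section 3.3.3):

```java
UniversalJoint joint = new UniversalJoint (bodyA, bodyB, TDW);
joint.setRoll (20);           // roll angle (degrees)
joint.setPitch (40);          // pitch angle (degrees)
joint.setShaftLength (0.3);   // shafts about the roll and pitch axes
joint.setJointRadius (0.05);  // ball about the origin of D
RenderProps.setFaceColor (joint, Color.RED);
```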
The SkewedUniversalJoint (Figure 3.18) is a version of the universal joint in which the pitch axis is skewed relative to its nominal direction by a fixed skew angle. More precisely, let ...