Babel fish

This chapter provides information that will help you translate your MPL parallel program into a program that conforms to the MPI standard. In particular, it tells you which MPI calls to substitute for the ones you use right now in MPL.

In The Hitchhiker's Guide to the Galaxy, the Babel fish is a tiny fish that, when inserted into your ear, makes any language understandable to you. Translating a parallel program is not quite that easy, but this chapter at least helps you determine how to perform in MPI the equivalent or comparable function you performed with MPL.

Note that the syntax in this section is C unless we tell you otherwise. For the corresponding Fortran MPI syntax, see IBM Parallel Environment for AIX: MPI Programming and Subroutine Reference. For the corresponding Fortran MPL syntax, see IBM Parallel Environment for AIX: MPL Programming and Subroutine Reference. Another document that may be helpful is A Message-Passing Interface Standard, Version 1.1, available from the University of Tennessee.


Point-to-Point Communication

SEND (Non-Blocking)


MPL: mpc_send(&buf,msglen,dest,tag,&msgid)
MPI: MPI_Isend(&buf,count,datatype,dest,tag,comm,&request)

RECEIVE (Non-Blocking)


MPL: mpc_recv(&buf,msglen,&source,&tag,&msgid)
MPI: MPI_Irecv(&buf,count,datatype,source,tag,comm,&request)
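For example, here is a minimal sketch of a two-task exchange built from these mappings. The task count, element count, tag value, and variable names are illustrative; note how MPL's msglen in bytes becomes a count of MPI datatype elements (16 bytes of ints becomes 4 MPI_INTs, assuming 4-byte ints).

   #include <mpi.h>
   #include <stdio.h>

   /* Two tasks swap four integers with non-blocking calls. */
   int main(int argc, char *argv[])
   {
       int rank, partner, sendbuf[4] = {0,1,2,3}, recvbuf[4];
       MPI_Request sreq, rreq;
       MPI_Status status;

       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       partner = 1 - rank;                 /* assumes exactly 2 tasks */

       /* mpc_recv(&recvbuf,16,&partner,&tag,&msgid) becomes: */
       MPI_Irecv(recvbuf, 4, MPI_INT, partner, 0, MPI_COMM_WORLD, &rreq);
       /* mpc_send(&sendbuf,16,partner,tag,&msgid) becomes: */
       MPI_Isend(sendbuf, 4, MPI_INT, partner, 0, MPI_COMM_WORLD, &sreq);

       MPI_Wait(&rreq, &status);           /* one mpc_wait per msgid */
       MPI_Wait(&sreq, &status);

       printf("task %d received %d %d %d %d\n", rank,
              recvbuf[0], recvbuf[1], recvbuf[2], recvbuf[3]);
       MPI_Finalize();
       return 0;
   }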

SEND (Blocking)


MPL: mpc_bsend(&buf,msglen,dest,tag)
MPI: MPI_Send(&buf,count,datatype,dest,tag,comm)
Note: Don't confuse MPI_Bsend with MPI_Send. MPI_Bsend is a BUFFERED send, not a BLOCKING send.

RECEIVE (Blocking)


MPL: mpc_brecv(&buf,msglen,&source,&tag,&nbytes)
MPI: MPI_Recv(&buf,count,datatype,source,tag,comm,&status)

SEND/RECEIVE (Blocking)


MPL: mpc_bsendrecv(&sendbuf,sendlen,dest,tag,&recvbuf,recvlen,&source,&nbytes)
MPI: MPI_Sendrecv(&sendbuf,sendcount,sendtype,dest,tag,&recvbuf,recvcount,recvtype,source,tag,comm,&status)

STATUS


MPL: nbytes = mpc_status(msgid)
MPI: MPI_Get_count(&status,MPI_BYTE,&nbytes)

WAIT


MPL: mpc_wait(&msgid,&nbytes)
MPI:

For a specific msgid:

  • MPI_Wait(&request,&status)

For msgid = DONTCARE:

  • MPI_Waitany(count,requests,&index,&status)
  • The requests array must be maintained by the user.

For msgid = ALLMSG:

  • MPI_Waitall(count,requests,statuses)
  • The requests array must be maintained by the user.
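For example, a hedged sketch of both cases; the four outstanding receives and their sources are illustrative:

   /* Assumes MPI_Init has been called; the application keeps
      its own array of requests.                              */
   MPI_Request requests[4];
   MPI_Status  status, statuses[4];
   int         i, index, buf[4];

   for (i = 0; i < 4; i++)    /* post one receive per source task */
       MPI_Irecv(&buf[i], 1, MPI_INT, i, 0, MPI_COMM_WORLD, &requests[i]);

   /* msgid = DONTCARE: wait for whichever request completes first */
   MPI_Waitany(4, requests, &index, &status);

   /* msgid = ALLMSG: wait for all outstanding requests; entries
      already completed are MPI_REQUEST_NULL and are skipped      */
   MPI_Waitall(4, requests, statuses);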

TASK_SET


MPL: mpc_task_set(nbuf,stype)
MPI:

Truncation Mode:

  • No MPI equivalent. Can be simulated by setting the error handler to "return":
    MPI_Errhandler_set(comm,MPI_ERRORS_RETURN);
    

    and testing the return code for receives, waits for receives, etc.:

    MPI_Error_class(rc,&class);
    if(class != MPI_ERR_TRUNCATE)
    { (handle error) }
    

Develop/Run Mode:

  • Enable DEVELOP mode by setting MP_EUIDEVELOP environment variable to YES.

Buffer Mode:

  • Use MPI_Buffer_attach.
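Putting these modes together, a hedged sketch; the buffer sizing, message sizes, and partner ranks are illustrative, and the receive and send below are independent fragments rather than a matched pair:

   /* Assumes MPI_Init has been called and <stdlib.h> is included. */
   int        rc, class, bufsize, sendbuf[10] = {0}, recvbuf[10];
   char      *buffer;
   MPI_Status status;

   /* Truncation mode: take return codes instead of aborting */
   MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
   rc = MPI_Recv(recvbuf, 10, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
   if (rc != MPI_SUCCESS) {
       MPI_Error_class(rc, &class);
       if (class != MPI_ERR_TRUNCATE) {
           /* a real error - truncation alone may be tolerated */
       }
   }

   /* Buffer mode: attach a buffer, then use buffered sends */
   MPI_Pack_size(10, MPI_INT, MPI_COMM_WORLD, &bufsize);
   bufsize += MPI_BSEND_OVERHEAD;
   buffer = (char *)malloc(bufsize);
   MPI_Buffer_attach(buffer, bufsize);
   MPI_Bsend(sendbuf, 10, MPI_INT, 1, 0, MPI_COMM_WORLD);
   MPI_Buffer_detach(&buffer, &bufsize);
   free(buffer);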

TASK_QUERY


MPL: mpc_task_query(nbuf,nelem,qtype)
MPI:

Truncation Mode:

  • No MPI equivalent

Message Type Bounds:

  • The lower bound is 0. The upper bound can be obtained as follows:

    int *valptr;
    MPI_Attr_get(MPI_COMM_WORLD,MPI_TAG_UB,&valptr,&flag);
    tag_up_bound = *valptr;

Wildcards:

  ALLGRP ( 0)     MPI_COMM_WORLD
  DONTCARE (-1)   MPI_ANY_SOURCE, MPI_ANY_TAG
  ALLMSG (-2)     no MPI equivalent - see mpc_wait
  NULLTASK (-3)   MPI_PROC_NULL
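As a worked example of the message type bounds query above, a complete sketch (variable names are ours):

   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char *argv[])
   {
       int *valptr, flag;

       MPI_Init(&argc, &argv);
       /* MPI_TAG_UB is a predefined attribute of MPI_COMM_WORLD */
       MPI_Attr_get(MPI_COMM_WORLD, MPI_TAG_UB, &valptr, &flag);
       if (flag)
           printf("tags range from 0 to %d\n", *valptr);
       MPI_Finalize();
       return 0;
   }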

ENVIRON


MPL: mpc_environ(&numtask,&taskid)
MPI:
  MPI_Comm_size(MPI_COMM_WORLD,&numtask);
  MPI_Comm_rank(MPI_COMM_WORLD,&taskid);

STOPALL


MPL: mpc_stopall(errcode)
MPI: MPI_Abort(comm,errcode)

PACK


MPL: mpc_pack(&inbuf,&outbuf,blklen,offset,blknum)
MPI:
  MPI_Type_hvector(blknum,blklen,offset,MPI_BYTE,&datatype);
  MPI_Type_commit(&datatype);
  position = 0;
  outcount = blknum*blklen;
  MPI_Pack(&inbuf,1,datatype,&outbuf,outcount,&position,comm);

UNPACK


MPL: mpc_unpack(&inbuf,&outbuf,blklen,offset,blknum)
MPI:
  MPI_Type_hvector(blknum,blklen,offset,MPI_BYTE,&datatype);
  MPI_Type_commit(&datatype);
  position = 0;
  insize = blknum*blklen;
  MPI_Unpack(&inbuf,insize,&position,&outbuf,1,datatype,comm);
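To illustrate the two mappings above, a round-trip sketch; the block count, block length, and stride are illustrative, and 4-byte ints are assumed:

   /* Assumes MPI_Init has been called. Gather every other int from
      src into a contiguous packed buffer, then scatter it back out
      into the strided layout of dst.                               */
   int          src[8] = {0,1,2,3,4,5,6,7}, dst[8] = {0};
   char         packed[16];           /* 4 blocks * 4 bytes         */
   int          position;
   MPI_Datatype strided;

   /* 4 blocks of 4 bytes, start-to-start stride of 8 bytes */
   MPI_Type_hvector(4, 4, 8, MPI_BYTE, &strided);
   MPI_Type_commit(&strided);

   position = 0;
   MPI_Pack(src, 1, strided, packed, 16, &position, MPI_COMM_WORLD);

   position = 0;
   MPI_Unpack(packed, 16, &position, dst, 1, strided, MPI_COMM_WORLD);
   /* dst is now {0,0,2,0,4,0,6,0}: the strided elements of src */

   MPI_Type_free(&strided);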

VSEND (Blocking)


MPL: mpc_bvsend(&buf,blklen,offset,blknum,dest,tag)
MPI:
  MPI_Type_hvector(blknum,blklen,offset,MPI_BYTE,&datatype);
  MPI_Type_commit(&datatype);
  MPI_Send(&buf,1,datatype,dest,tag,comm);

VRECV (Blocking)


MPL: mpc_bvrecv(&buf,blklen,offset,blknum,&source,&tag,&nbytes)
MPI:
  MPI_Type_hvector(blknum,blklen,offset,MPI_BYTE,&datatype);
  MPI_Type_commit(&datatype);
  MPI_Recv(&buf,1,datatype,source,tag,comm,&status);

PROBE


MPL: mpc_probe(&source,&tag,&nbytes)
MPI: MPI_Iprobe(source,tag,comm,&flag,&status)
Note: MPI also provides a blocking version of probe, MPI_Probe, which can be substituted for an MPL probe in an infinite loop.
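For example, a sketch of the blocking idiom, recovering mpc_probe's nbytes with MPI_Get_count; the buffer bound is illustrative:

   /* Assumes MPI_Init has been called; blocks until a message from
      any source with any tag is pending, then receives it whole.   */
   MPI_Status status;
   int        nbytes, source, tag;
   char       buf[4096];             /* illustrative upper bound */

   MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
   MPI_Get_count(&status, MPI_BYTE, &nbytes);  /* mpc_probe's nbytes */
   source = status.MPI_SOURCE;
   tag    = status.MPI_TAG;
   MPI_Recv(buf, nbytes, MPI_BYTE, source, tag, MPI_COMM_WORLD, &status);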


Collective Communications

BROADCAST


MPL: mpc_bcast(&buf,msglen,root,gid)
MPI: MPI_Bcast(&buf,count,datatype,root,comm)

COMBINE


MPL: mpc_combine(&sendbuf,&recvbuf,msglen,func,gid)
MPI: MPI_Allreduce(&sendbuf,&recvbuf,count,datatype,op,comm)
Note: See "Reduction Functions".

CONCAT


MPL: mpc_concat(&sendbuf,&recvbuf,blklen,gid)
MPI: MPI_Allgather(&sendbuf,sendcount,sendtype,&recvbuf,recvcount,recvtype,comm)

GATHER


MPL: mpc_gather(&sendbuf,&recvbuf,blklen,root,gid)
MPI: MPI_Gather(&sendbuf,count,datatype,&recvbuf,count,datatype,root,comm)

INDEX


MPL: mpc_index(&sendbuf,&recvbuf,blklen,gid)
MPI: MPI_Alltoall(&sendbuf,count,datatype,&recvbuf,count,datatype,comm)

PREFIX


MPL: mpc_prefix(&sendbuf,&recvbuf,msglen,func,gid)
MPI: MPI_Scan(&sendbuf,&recvbuf,count,datatype,op,comm)
Note: See "Reduction Functions".

REDUCE


MPL: mpc_reduce(&sendbuf,&recvbuf,msglen,root,func,gid)
MPI: MPI_Reduce(&sendbuf,&recvbuf,count,datatype,op,root,comm)
Note: See "Reduction Functions".

SCATTER


MPL: mpc_scatter(&sendbuf,&recvbuf,blklen,root,gid)
MPI: MPI_Scatter(&sendbuf,count,datatype,&recvbuf,count,datatype,root,comm)

SHIFT


MPL: mpc_shift(&sendbuf,&recvbuf,msglen,step,flag,gid)
MPI:
  MPI_Cart_shift(comm,direction,step,&source,&dest);
  MPI_Sendrecv(&sendbuf,count,datatype,dest,tag,&recvbuf,count,datatype,
               source,tag,comm,&status);

Note: comm must be a communicator with a cartesian topology. See MPI_CART_CREATE in IBM Parallel Environment for AIX: MPI Programming and Subroutine Reference.
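A sketch of the full recipe on four tasks, creating the cartesian communicator first; the dimension sizes, periodicity, and payload are illustrative:

   /* Assumes MPI_Init has been called on 4 tasks. Each task sends
      one int to its right neighbor and receives from its left.    */
   MPI_Comm   cart;
   MPI_Status status;
   int        dims[1] = {4}, periods[1] = {1};  /* 1-D, wraparound */
   int        source, dest, sendbuf, recvbuf, rank;

   MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);
   MPI_Comm_rank(cart, &rank);
   sendbuf = rank;

   MPI_Cart_shift(cart, 0, 1, &source, &dest);  /* step of +1 */
   MPI_Sendrecv(&sendbuf, 1, MPI_INT, dest, 0,
                &recvbuf, 1, MPI_INT, source, 0, cart, &status);
   /* recvbuf now holds the rank of the left neighbor */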

SYNC


MPL: mpc_sync(gid)
MPI: MPI_Barrier(comm)

GETLABEL


MPL: mpc_getlabel(&label,gid)
MPI: No MPI equivalent. Can be simulated by creating a label attribute key with MPI_Keyval_create, attaching a label attribute to a communicator with MPI_Attr_put, and retrieving it with MPI_Attr_get.

GETMEMBERS


MPL: mpc_getmembers(&glist,gid)
MPI:
  MPI_Comm_group(MPI_COMM_WORLD,&group_world);
  MPI_Group_size(group,&gsize);
  for(i=0;i<gsize;i++) ranks[i] = i;
  MPI_Group_translate_ranks(group,gsize,ranks,group_world,glist);

Here group is the MPI group corresponding to gid; the same convention is used in the entries that follow.

GETRANK


MPL: mpc_getrank(&rank,taskid,gid)
MPI:
  MPI_Comm_group(MPI_COMM_WORLD,&group_world);
  MPI_Group_translate_ranks(group_world,1,&taskid,group,&rank);

GETSIZE


MPL: mpc_getsize(&gsize,gid)
MPI: MPI_Group_size(group,&gsize)

GETTASKID


MPL: mpc_gettaskid(rank,&taskid,gid)
MPI:
  MPI_Comm_group(MPI_COMM_WORLD,&group_world);
  MPI_Group_translate_ranks(group,1,&rank,group_world,&taskid);

GROUP


MPL: mpc_group(gsize,&glist,label,&gid)
MPI:
  MPI_Comm_group(MPI_COMM_WORLD,&group_world);
  MPI_Group_incl(group_world,gsize,&glist,&gid);
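Note that MPL group ids were passed directly to collective calls, while MPI collectives take a communicator, so a translated program will usually also need MPI_Comm_create. A sketch; the membership list is illustrative:

   /* Assumes MPI_Init has been called on at least 4 tasks.
      Build a group from the first 4 world ranks, then derive a
      communicator usable with MPI collectives.                 */
   MPI_Group group_world, newgroup;
   MPI_Comm  newcomm;
   int       glist[4] = {0, 1, 2, 3};

   MPI_Comm_group(MPI_COMM_WORLD, &group_world);
   MPI_Group_incl(group_world, 4, glist, &newgroup);
   MPI_Comm_create(MPI_COMM_WORLD, newgroup, &newcomm);
   /* tasks outside the group get newcomm == MPI_COMM_NULL */
   if (newcomm != MPI_COMM_NULL)
       MPI_Barrier(newcomm);       /* e.g., mpc_sync(gid) */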

PARTITION


MPL: mpc_partition(parent_gid,key,label,&gid)
MPI: MPI_Comm_split(comm,label,key,&newcomm)
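For example, a hedged sketch splitting MPI_COMM_WORLD into even- and odd-rank communicators; the color value, which plays the role of MPL's label, is ours:

   /* Assumes MPI_Init has been called. 'color' maps to MPL's label
      and the key (here, rank) orders tasks in each new communicator. */
   MPI_Comm newcomm;
   int      rank, color;

   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   color = rank % 2;                     /* two partitions */
   MPI_Comm_split(MPI_COMM_WORLD, color, rank, &newcomm);
   /* even world ranks now share one communicator, odd the other */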


Reduction Functions


MPL Function   MPI Operator   MPI Datatype
i_vadd         MPI_SUM        MPI_INT, MPI_INTEGER
s_vadd         MPI_SUM        MPI_FLOAT, MPI_REAL
d_vadd         MPI_SUM        MPI_DOUBLE, MPI_DOUBLE_PRECISION
i_vmul         MPI_PROD       MPI_INT, MPI_INTEGER
s_vmul         MPI_PROD       MPI_FLOAT, MPI_REAL
d_vmul         MPI_PROD       MPI_DOUBLE, MPI_DOUBLE_PRECISION
i_vmax         MPI_MAX        MPI_INT, MPI_INTEGER
s_vmax         MPI_MAX        MPI_FLOAT, MPI_REAL
d_vmax         MPI_MAX        MPI_DOUBLE, MPI_DOUBLE_PRECISION
i_vmin         MPI_MIN        MPI_INT, MPI_INTEGER
s_vmin         MPI_MIN        MPI_FLOAT, MPI_REAL
d_vmin         MPI_MIN        MPI_DOUBLE, MPI_DOUBLE_PRECISION
b_vand         MPI_BAND       MPI_BYTE
b_vor          MPI_BOR        MPI_BYTE
b_vxor         MPI_BXOR       MPI_BYTE
l_vand         MPI_LAND       MPI_BYTE
l_vor          MPI_LOR        MPI_BYTE

Note: The count parameter can be computed as follows:

  MPI_Type_size(datatype,&size);
  count = msglen/size;
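Combining the table and the note, a hedged sketch of translating an mpc_combine of 64 bytes of doubles using d_vadd; the message length and buffer contents are illustrative:

   /* Assumes MPI_Init has been called; msglen is in bytes, as in MPL. */
   double sendbuf[8] = {1,2,3,4,5,6,7,8}, recvbuf[8];
   int    msglen = 64, size, count;

   MPI_Type_size(MPI_DOUBLE, &size);
   count = msglen / size;                /* 64 bytes -> 8 doubles */
   MPI_Allreduce(sendbuf, recvbuf, count, MPI_DOUBLE, MPI_SUM,
                 MPI_COMM_WORLD);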

User-Defined Reduction Functions


MPL: void func(&inbuf1,&inbuf2,&outbuf,&len)

Note that func is passed as an argument to the Collective Communication Library (CCL) function.

MPI: void func(&inbuf,&inoutbuf,&count,&datatype)
     MPI_Op_create(func,commute,&op)

Note that op is passed as an argument to the CCL function.
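A sketch of defining and registering a commutative user function; the operation, element-wise integer addition, is our example:

   #include <mpi.h>
   #include <stdio.h>

   /* User function with the required MPI signature. It must
      accumulate the result into the second (inout) buffer.   */
   void int_add(void *in, void *inout, int *len, MPI_Datatype *dtype)
   {
       int i;
       for (i = 0; i < *len; i++)
           ((int *)inout)[i] += ((int *)in)[i];
   }

   int main(int argc, char *argv[])
   {
       int    rank, sum;
       MPI_Op op;

       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);

       MPI_Op_create(int_add, 1, &op);   /* 1 = commutative */
       MPI_Allreduce(&rank, &sum, 1, MPI_INT, op, MPI_COMM_WORLD);
       MPI_Op_free(&op);

       printf("task %d: sum of ranks = %d\n", rank, sum);
       MPI_Finalize();
       return 0;
   }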


Global Variables and Constants

Last Error Code


MPL: mperrno
MPI: No equivalent; error codes are returned by each function.

Wildcards


MPL Wildcard    MPI Equivalent
ALLGRP ( 0)     MPI_COMM_WORLD
DONTCARE (-1)   MPI_ANY_SOURCE, MPI_ANY_TAG
ALLMSG (-2)     no MPI equivalent - see mpc_wait
NULLTASK (-3)   MPI_PROC_NULL


General Notes

This section provides some specific things to keep in mind when translating your program from MPL to MPI.

Task Identifiers

In MPL, task identifiers such as src and dest are absolute task ids. In MPI, they are ranks within a communicator group. For the communicator MPI_COMM_WORLD, they are the same.

Message Length

In MPL, message lengths are expressed in bytes. In MPI, they are expressed as a count of elements of a given datatype. See the note under "Reduction Functions" for one way to convert an MPL msglen to an MPI count.

Creating MPI Objects

MPI Objects should be created as follows:
Object           C                           Fortran
Communicators    MPI_Comm commid             integer commid
Groups           MPI_Group groupid           integer groupid
Requests         MPI_Request requestid       integer requestid
Reduction Ops    MPI_Op opid                 integer opid
Error Handlers   MPI_Errhandler handlerid    integer handlerid
Data Types       MPI_Datatype typeid         integer typeid
Attribute Keys   int keyid                   integer keyid
Status           MPI_Status status           integer status(MPI_STATUS_SIZE)

Using Wildcard Receives

For wildcard receives, MPL stored the actual source and message type back into the source and type parameters supplied with the receive call. In MPI, the actual values are returned in the status parameter and may be retrieved as follows.

For programs written in C:

source = status.MPI_SOURCE;
tag    = status.MPI_TAG;

For programs written in Fortran:

source = status(MPI_SOURCE)
tag    = status(MPI_TAG)

Also note that for C applications, MPL passed the source and type parameters by reference, whereas MPI passes them by value.
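For example, a short sketch in C; the element count and tag wildcarding are illustrative:

   /* Assumes MPI_Init has been called. Receive from anyone, then
      recover the actual source and tag from the status object.   */
   MPI_Status status;
   int        buf[16], source, tag;

   MPI_Recv(buf, 16, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
            MPI_COMM_WORLD, &status);
   source = status.MPI_SOURCE;     /* who actually sent it    */
   tag    = status.MPI_TAG;        /* with which message type */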

Reduction Functions

In MPI, user-defined reduction functions can be defined as commutative or non-commutative (see MPI_Op_create), whereas in MPL, all reduction functions were assumed to be commutative. Reduction functions must be associative in both MPL and MPI.

Error Handling

In MPL, C functions provided return codes that could be checked to determine if an error occurred, and Fortran functions printed error messages and terminated the job. In MPI, the default behavior for both C and Fortran is to print a message and terminate the job. If return codes are desired, the error handler must be set as follows (per communicator):

   MPI_Errhandler_set(comm,MPI_ERRORS_RETURN);

In Fortran, error codes are returned in the last parameter of each function, ierror.

Also, IBM's MPI implementation provides a third predefined error handler, MPE_ERRORS_WARN, which prints a message and returns an error code without terminating the job. In DEVELOP mode, messages are always printed.

Mixing MPL and MPI Functions in the Same Application

MPL and MPI functions can be used in the same application (with the non-threaded library only), but note the following:

The same DEVELOP mode environment variable, MP_EUIDEVELOP, is used by both MPL and MPI. If it is set to YES, DEVELOP mode is turned on for both MPL and MPI.

Before and After Using MPI Functions

All application programs that use MPI functions must call MPI_Init before calling any other MPI function (except MPI_Initialized). All applications that use MPI functions should call MPI_Finalize as the last MPI call they make. Failure to do this may make the application non-portable.

If an application makes no MPI calls, then it is not necessary for it to call MPI_Init or MPI_Finalize.

Using Message Passing Handlers

Only a subset of MPL message passing is allowed on handlers that are created by the MPL Receive and Call function (mpc_rcvncall or MP_RCVNCALL). MPI calls on these handlers are not supported.

