From 9b58d35185905f8334142bf4988cb784e993aea7 Mon Sep 17 00:00:00 2001 From: Timothy Pearson Date: Mon, 21 Nov 2011 02:23:03 -0600 Subject: Initial import of extracted KDE i18n tarballs --- .../docs/kdemultimedia/artsbuilder/mcop.docbook | 1899 ++++++++++++++++++++ 1 file changed, 1899 insertions(+) create mode 100644 tde-i18n-de/docs/kdemultimedia/artsbuilder/mcop.docbook (limited to 'tde-i18n-de/docs/kdemultimedia/artsbuilder/mcop.docbook') diff --git a/tde-i18n-de/docs/kdemultimedia/artsbuilder/mcop.docbook b/tde-i18n-de/docs/kdemultimedia/artsbuilder/mcop.docbook new file mode 100644 index 00000000000..2f7ef5de06d --- /dev/null +++ b/tde-i18n-de/docs/kdemultimedia/artsbuilder/mcop.docbook @@ -0,0 +1,1899 @@ + + + +&MCOP;: Object Model and Streaming + + + +Overview + + &MCOP; is the standard that &arts; uses for the following tasks: + + Communication between objects. Network transparency. Description of object interfaces. Programming language independence. + + An important aspect of &MCOP; is the Interface Description Language, &IDL;, in which many of the interfaces and APIs of &arts; have been defined in a language-independent way. + + &IDL; interfaces are translated into C++ source code by the &IDL; compiler. When you implement an interface, you derive from the class skeleton that the &IDL; compiler has generated. When you use an interface, you do so through a wrapper. That way, &MCOP; can use a protocol if the object you are using is not local - you get full network transparency. + + This chapter describes the basic features of the object model that results from the use of &MCOP;, the protocol itself, and its use in C++ (the language binding). This chapter is not translated, since English language skills are indispensable for programming anyway. + + + + + +Interfaces and &IDL; + + Many of the services provided by &arts;, such as modules and the sound server, are defined in terms of interfaces. Interfaces are specified in a programming-language-independent format: &IDL;. + + This allows many of the implementation details, such as the format of multimedia data streams, network transparency, and programming language dependencies, to be hidden from the interface specification. A tool, &mcopidl;, translates the interface definition into a specific programming language (currently only C++ is supported). + + The tool generates a skeleton class with all of the boilerplate code and base functionality. You derive from that class to implement the features you want. + + The &IDL; used by &arts; is similar to that used by CORBA and DCOM. + + &IDL; files can contain: + + C-style #include directives for other &IDL; files. Definitions of enumerated and struct types, as in C/C++. Definitions of interfaces. + + In &IDL;, interfaces are defined much like a C++ class or C struct, albeit with some restrictions. Like C++, interfaces can subclass other interfaces using inheritance. Interface definitions can include three things: streams, attributes, and methods. + + + +Streams + + Streams define multimedia data, one of the most important components of a module. Streams are defined in the following format: + + [ async ] in|out [ multi ] type stream name [ , name ] ; + + Streams have a defined direction in reference to the module, as indicated by the required qualifiers in or out.
The type argument defines the type of data, which can be any of the types described later for attributes (not all are currently supported). Many modules use the stream type audio, which is an alias for float, since that is the internal data format used for audio streams. Multiple streams of the same type can be defined in the same declaration using comma-separated names. + + Streams are by default synchronous, which means they are continuous flows of data at a constant rate, such as PCM audio. The async qualifier specifies an asynchronous stream, which is used for non-continuous data flows. The most common example of an async stream is &MIDI; messages. + + The multi keyword, only valid for input streams, indicates that the interface supports a variable number of inputs. This is useful for implementing devices such as mixers that can accept any number of input streams. + + + + +Attributes + + Attributes are data associated with an instance of an interface. They are declared like member variables in C++, and can use any of the primitive types boolean, byte, long, string, or float. You can also use user-defined struct or enum types as well as variable-sized sequences using the syntax sequence<type>. Attributes can optionally be marked readonly. + + + + +Methods + + As in C++, methods can be defined in interfaces. The method parameters are restricted to the same types as attributes. The keyword oneway indicates a method which returns immediately and is executed asynchronously. + + + + + +Standard Interfaces + + Several standard module interfaces are already defined for you in &arts;, such as StereoEffect and SimpleSoundServer. + + + + +Example + + A simple example of a module taken from &arts; is the constant delay module, found in the file kdemultimedia/arts/modules/artsmodules.idl. The interface definition is listed below.
+
+interface Synth_CDELAY : SynthModule {
+ attribute float time;
+ in audio stream invalue;
+ out audio stream outvalue;
+};
+
This module inherits from SynthModule. That interface, defined in artsflow.idl, defines the standard methods implemented in all music synthesizer modules. + + The CDELAY effect delays an audio stream by the time value specified as a floating-point parameter. The interface definition has an attribute of type float to store the delay value. It defines one input audio stream and one output audio stream. No methods are required other than those it inherits. + + + + + + +More About Streams + + This section covers some additional topics related to streams. + + +Stream Types + + There are various requirements for how a module can do streaming. To illustrate this, consider these examples: + + Scaling a signal by a factor of two. Performing sample frequency conversion. Decompressing a run-length encoded signal. Reading &MIDI; events from /dev/midi00 and inserting them into a stream. + + The first case is the simplest: upon receiving 200 samples of input, the module produces 200 samples of output. It only produces output when it gets input. + + The second case produces a different number of output samples when given 200 input samples. The number depends on what conversion is performed, but it is known in advance. + + The third case is even worse. From the outset you cannot even guess how much data 200 input bytes will generate (probably a lot more than 200 bytes, but...). + + The last case is a module which becomes active by itself, and sometimes produces data.
+ + In &arts; 0.3.4, only streams of the first type were handled, and most things worked nicely. This is probably what you need most when writing modules that process audio. The problem with the other, more complex types of streaming is that they are hard to program, and that you don't need those features most of the time. That is why we do this with two different stream types: synchronous and asynchronous. + + Synchronous streams have these characteristics: + + Modules must be able to calculate data of any length, given enough input. All streams have the same sampling rate. The calculateBlock() function will be called when enough data is available, and the module can rely on the pointers pointing to data. There is no allocation and deallocation to be done. + + Asynchronous streams, on the other hand, have this behaviour: + + Modules may produce data only sometimes, or with a varying sampling rate, or only if they have input from some file descriptor. They are not bound by the rule "must be able to satisfy requests of any size". Asynchronous streams of a module may have entirely different sampling rates. Outgoing streams: there are explicit functions to allocate packets and to send packets - and an optional polling mechanism that will tell you when you should create some more data. Incoming streams: you get a call when you receive a new packet - you have to say when you are through with processing all data of that packet, which need not happen at once (you can say that any time later, and if everybody has processed a packet, it will be freed/reused). + + When you declare streams, you use the keyword async to indicate that you want to make an asynchronous stream. So, for instance, assume you want to convert an asynchronous stream of bytes into a synchronous stream of samples. Your interface could look like this:
+
+interface ByteStreamToAudio : SynthModule {
+ async in byte stream indata; // the asynchronous input byte stream
+
+ out audio stream left,right; // the synchronous output sample streams
+};
+
+ + + + +Using Asynchronous Streams + + Suppose you decided to write a module to produce sound asynchronously. Its interface could look like this:
+
+interface SomeModule : SynthModule
+{
+ async out byte stream outdata;
+};
+
How do you send the data? The first method is called push delivery. With asynchronous streams you send the data as packets. That means you send individual packets with bytes as in the above example. The actual process is: allocate a packet, fill it, send it. + + Here it is in terms of code. First we allocate a packet:
+
+DataPacket<mcopbyte> *packet = outdata.allocPacket(100);
+
Then we fill it:
+
+// cast so that fgets is happy that it has a (char *) pointer
+char *data = (char *)packet->contents;
+
+// as you can see, you can shrink the packet size after allocation
+// if you like
+if(fgets(data,100,stdin))
+ packet->size = strlen(data);
+else
+ packet->size = 0;
+
Now we send it:
+
+packet->send();
+
This is quite simple, but if we want to send packets exactly as fast as the receiver can process them, we need another approach, the pull delivery method. You ask to send packets as fast as the receiver is ready to process them. You start with a certain number of packets you send. As the receiver processes one packet after another, you start refilling them with fresh data, and send them again. + + You start that by calling setPull. For example:
+
+outdata.setPull(8, 1024);
+
This means that you want to send packets over outdata.
You want to start sending 8 packets at once, and as the receiver processes some of them, you want to refill them. + + Then, you need to implement a method which fills the packets, which could look like this:
+
+void request_outdata(DataPacket<mcopbyte> *packet)
+{
+ packet->size = 1024; // shouldn't be more than 1024
+ for(int i = 0;i < 1024; i++)
+ packet->contents[i] = (mcopbyte)'A';
+ packet->send();
+}
+
That's it. When you don't have any data any more, you can start sending packets with zero size, which will stop the pulling. + + Note that it is essential to give the method the exact name request_streamname. + + We just discussed sending data. Receiving data is much, much simpler. Suppose you have a simple ToLower filter, which simply converts all letters to lowercase:
+
+interface ToLower {
+ async in byte stream indata;
+ async out byte stream outdata;
+};
+
This is really simple to implement; here is the whole implementation:
+
+#include <ctype.h> // for tolower()
+
+class ToLower_impl : public ToLower_skel {
+public:
+ void process_indata(DataPacket<mcopbyte> *inpacket)
+ {
+ DataPacket<mcopbyte> *outpacket = outdata.allocPacket(inpacket->size);
+
+ // convert to lowercase letters
+ char *instring = (char *)inpacket->contents;
+ char *outstring = (char *)outpacket->contents;
+
+ for(int i=0;i<inpacket->size;i++)
+ outstring[i] = tolower(instring[i]);
+
+ inpacket->processed();
+ outpacket->send();
+ }
+};
+
+REGISTER_IMPLEMENTATION(ToLower_impl);
+
Again, it is essential to name the method process_streamname. + + As you see, for each arriving packet you get a call of a function (the process_indata call in our case). You need to call the processed() method of a packet to indicate you have processed it. + + Here is an implementation tip: if processing takes longer (i.e. if you need to wait for soundcard output or something like that), don't call processed immediately, but store the whole data packet and call processed only once you have really processed that packet. That way, senders have a chance to know how long it really takes to do your work. + + As synchronization isn't so nice with asynchronous streams, you should use synchronous streams wherever possible, and asynchronous streams only when necessary. + + + + +Default Streams + + Suppose you have two objects, for example an AudioProducer and an AudioConsumer. The AudioProducer has an output stream and the AudioConsumer has an input one. Each time you want to connect them, you will use those two streams. The first use of defaulting is to enable you to make the connection without specifying the ports in that case. + + Now suppose the two objects above can handle stereo, and each has a left and a right port. You'd still like to connect them as easily as before. But how can the connecting system know which output port to connect to which input port? It has no way to correctly map the streams. Defaulting is then used to specify several streams, with an order. Thus, when you connect an object with two default output streams to another one with two default input streams, you don't have to specify the ports, and the mapping will be done correctly. + + Of course, this is not limited to stereo. Any number of streams can be made default if needed, and the connect function will check that the numbers of defaults for the two objects match (in the required direction) if you don't specify the ports to use. + + The syntax is as follows: in the &IDL;, you can use the default keyword in the stream declaration, or on a single line.
For example:
+
+interface TwoToOneMixer {
+ default in audio stream input1, input2;
+ out audio stream output;
+};
+
In this example, the object will expect its two input ports to be connected by default. The order is the one specified on the default line, so an object like this one:
+
+interface DualNoiseGenerator {
+ out audio stream bzzt, couic;
+ default couic, bzzt;
+};
+
will make connections from couic to input1, and from bzzt to input2, automatically. Note that since there is only one output for the mixer, it will be made default in this case (see below). The syntax used in the noise generator is useful for declaring a different order than the declaration order, or for selecting only a few ports as default. The directions of the ports on this line will be looked up by &mcopidl;, so don't specify them. You can even mix input and output ports in such a line; only the order is important. + + There are some rules that are followed when using inheritance: + + If a default list is specified in the &IDL;, then use it. Parent ports can be put in this list as well, whether they were default in the parent or not. Otherwise, inherit the parent's defaults. Ordering is parent1 default1, parent1 default2..., parent2 default1... If there is a common ancestor via two parent branches, a virtual-public-like merging is done at that default's first occurrence in the list. If there is still no default and there is a single stream in a direction, use it as the default for that direction. + + + + +Attribute change notifications + + Attribute change notifications are a way to know when an attribute has changed. They are somewhat comparable to &Qt;'s or Gtk's signals and slots. For instance, if you have a &GUI; element, a slider, which configures a number between 0 and 100, you will usually have an object that does something with that number (for instance, it might be controlling the volume of some audio signal). So you would like the object which scales the volume to get notified whenever the slider is moved. A connection between a sender and a receiver. + + &MCOP; deals with that by providing notifications when attributes change. Whatever is declared as an attribute in the &IDL; can emit such change notifications, and should do so whenever it is modified. Whatever is declared as an attribute can also receive such change notifications. So for instance if you had two &IDL; interfaces, like these:
+
+ interface Slider {
+ attribute long min,max;
+ attribute long position;
+ };
+ interface VolumeControl : Arts::StereoEffect {
+ attribute long volume; // 0..100
+ };
+
you can connect them using change notifications. It works using the normal flowsystem connect operation. In this case, the C++ code to connect the two objects would look like this:
+
+#include <connect.h>
+using namespace Arts;
+[...]
+connect(slider,"position_changed",volumeControl,"volume");
+
As you see, each attribute offers two different streams, one for sending the change notifications, called attributename_changed, and one for receiving change notifications, called attributename. + + It is important to know that change notifications and asynchronous streams are compatible. They are also network transparent. So you can connect a change notification of a float attribute of a &GUI; widget to an asynchronous stream of a synthesis module running on another computer.
This of course also implies that change notifications are not synchronous; that is, after you have sent the change notification, it may take some time until it is actually received. + + + +Sending change notifications + + When implementing objects that have attributes, you need to send change notifications whenever an attribute changes. The code for doing this looks like this:
+
+ void KPoti_impl::value(float newValue)
+ {
+ if(newValue != _value)
+ {
+ _value = newValue;
+ value_changed(newValue); // <- send change notification
+ }
+ }
+
It is strongly recommended to use code like this for all objects you implement, so that change notifications can be used by other people. You should, however, avoid sending notifications too often; if you are doing signal processing, it is probably best to keep track of when you sent your last notification, so that you don't send one with every sample you process. + + + +Applications for change notifications + + It will be especially useful to use change notifications in conjunction with scopes (things that visualize audio data, for instance), &GUI; elements, control widgets, and monitoring. Code using this is in kdelibs/arts/tests, and in the experimental artsgui implementation, which you can find under kdemultimedia/arts/gui. + + + + + + + + + +The .mcoprc file + + The .mcoprc file (in each user's home directory) can be used to configure &MCOP; in some ways. Currently, the following settings are possible: + + GlobalComm The name of an interface to be used for global communication. Global communication is used to find other objects and obtain the secret cookie. Multiple &MCOP; clients/servers that should be able to talk to each other need to have a GlobalComm object which is able to share information between them. Currently, the possible values are Arts::TmpGlobalComm to communicate via the /tmp/mcop-username directory (which will only work on the local computer) and Arts::X11GlobalComm to communicate via the root window properties on the X11 server. TraderPath Specifies where to look for trader information. You can list more than one directory here and separate them with commas, as in the example below. ExtensionPath Specifies from which directories extensions (in the form of shared libraries) are loaded. Multiple values can be specified comma-separated. + + An example which uses all of the above is:
+
+# $HOME/.mcoprc file
+GlobalComm=Arts::X11GlobalComm
+
+# if you are a developer, it might be handy to add a directory in your home
+# to the trader/extension path to be able to add components without
+# installing them
+TraderPath="/opt/kde2/lib/mcop","/home/joe/mcopdevel/mcop"
+ExtensionPath="/opt/kde2/lib","/home/joe/mcopdevel/lib"
+
+ + + +&MCOP; for CORBA Users + + If you have used CORBA before, you will see that &MCOP; is much the same thing. In fact, &arts; prior to version 0.4 used CORBA. + + The basic idea of CORBA is the same: you implement objects (components). By using the &MCOP; features, your objects are not only available as normal classes from the same process (via standard C++ techniques) - they are also available to remote servers transparently. For this to work, the first thing you need to do is to specify the interface of your objects in an &IDL; file - just like CORBA &IDL;. There are only a few differences. + + +CORBA Features That Are Missing In &MCOP; + + In &MCOP; there are no in and out parameters on method invocations.
Parameters are always incoming, the return code is always outgoing, which means that the interface:
+
+// CORBA idl
+interface Account {
+ void deposit( in long amount );
+ void withdraw( in long amount );
+ long balance();
+};
+
is written as
+
+// MCOP idl
+interface Account {
+ void deposit( long amount );
+ void withdraw( long amount );
+ long balance();
+};
+
in &MCOP;. + + There is no exception support. &MCOP; doesn't have exceptions - it uses something else for error handling. + + There are no union types and no typedefs. I don't know if that is a real weakness; they are hardly something one would desperately need to survive. + + There is no support for passing interfaces or object references. + + + +CORBA Features That Are Different In &MCOP; + + You declare sequences as sequence<type> in &MCOP;. There is no need for a typedef. For example, instead of:
+
+// CORBA idl
+struct Line {
+ long x1,y1,x2,y2;
+};
+typedef sequence<Line> LineSeq;
+interface Plotter {
+ void draw(in LineSeq lines);
+};
+
you would write
+
+// MCOP idl
+struct Line {
+ long x1,y1,x2,y2;
+};
+interface Plotter {
+ void draw(sequence<Line> lines);
+};
+
+ + + +&MCOP; Features That Are Not In CORBA + + You can declare streams, which will then be evaluated by the &arts; framework. Streams are declared in a similar manner to attributes. For example:
+
+// MCOP idl
+interface Synth_ADD : SynthModule {
+ in audio stream signal1,signal2;
+ out audio stream outvalue;
+};
+
This says that your object will accept two incoming synchronous audio streams called signal1 and signal2. Synchronous means that these are streams that deliver x samples per second (or per some other unit of time), so that the scheduler will guarantee to always provide you a balanced amount of input data (e.g. 200 samples of signal1 are there and 200 samples of signal2 are there). You guarantee that if your object is called with those 200 samples of signal1 + signal2, it is able to produce exactly 200 samples for outvalue. + + + +The &MCOP; C++ Language Binding + + This differs from CORBA mostly in these points: + + Strings use the C++ STL string class. When stored in sequences, they are stored plain, which means they are considered to be a primitive type. Thus, they need copying. longs are plain longs (expected to be 32 bit). Sequences use the C++ STL vector class. Structures are all derived from the &MCOP; class Type, and are generated by the &MCOP; &IDL; compiler. When stored in sequences, they are not stored plain, but as pointers, as otherwise too much copying would occur. + + + +Implementing &MCOP; Objects + + After passing your interfaces through the &IDL; compiler, you need to derive from the generated _skel class. For instance, suppose you have defined your interface like this:
+
+// MCOP idl: hello.idl
+interface Hello {
+ void hello(string s);
+ string concat(string s1, string s2);
+ long sum2(long a, long b);
+};
+
You pass that through the &IDL; compiler by calling mcopidl hello.idl, which will in turn generate hello.cc and hello.h.
To implement it, you need to define a C++ class that inherits the skeleton:
+
+// C++ header file - include hello.h somewhere
+class Hello_impl : virtual public Hello_skel {
+public:
+ void hello(const string& s);
+ string concat(const string& s1, const string& s2);
+ long sum2(long a, long b);
+};
+
Finally, you need to implement the methods as normal C++:
+
+// C++ implementation file
+
+// as you see, strings are passed as const string references
+void Hello_impl::hello(const string& s)
+{
+ printf("Hello '%s'!\n",s.c_str());
+}
+
+// when they are a return code they are passed as "normal" strings
+string Hello_impl::concat(const string& s1, const string& s2)
+{
+ return s1+s2;
+}
+
+long Hello_impl::sum2(long a, long b)
+{
+ return a+b;
+}
+
Once you do that, you have an object which can communicate using &MCOP;. Just create one (using the normal C++ facilities to create an object):
+
+ Hello_impl server;
+
And as soon as you give somebody the reference
+
+ string reference = server._toString();
+ printf("%s\n",reference.c_str());
+
and go to the &MCOP; idle loop
+
+Dispatcher::the()->run();
+
people can access the object using
+
+// this code can run anywhere - not necessarily in the same process
+// (it may also run on a different computer/architecture)
+
+ Hello *h = Hello::_fromString([the object reference printed above]);
+
and invoke methods:
+
+ if(h)
+ h->hello("test");
+ else
+ printf("Access failed?\n");
+
+ + + +&MCOP; Security Considerations + + Since &MCOP; servers will listen on a TCP port, potentially everybody (if you are on the Internet) may try to connect to &MCOP; services. Thus, it is important to authenticate clients. &MCOP; uses the md5-auth protocol. + + The md5-auth protocol does the following to ensure that only selected (trusted) clients may connect to a server: + + It assumes you can give every client a secret cookie. Every time a client connects, it verifies that this client knows that secret cookie, without actually transferring it (not even in a form from which somebody listening to the network traffic could recover it). + + To give each client the secret cookie, &MCOP; will (normally) put it in the mcop directory (under /tmp/mcop-USER/secret-cookie). Of course, you can copy it to other computers. However, if you do so, use a secure transfer mechanism, such as scp (from ssh). + + The authentication of clients uses the following steps: + + [SERVER] generate a new (random) cookie R. [SERVER] send it to the client. [CLIENT] read the "secret cookie" S from a file. [CLIENT] mangle the cookies R and S into a mangled cookie M using the MD5 algorithm (a sketch of this step is shown after the handshake description below). [CLIENT] send M to the server. [SERVER] verify that mangling R and S gives just the same thing as the cookie M received from the client. If yes, authentication is successful. + + This algorithm should be secure, given that + + the secret cookies and random cookies are random enough, and the MD5 hashing algorithm doesn't allow one to recover the original text, that is, the secret cookie S and the random cookie R (which is known anyway), from the mangled cookie M. + + The &MCOP; protocol will start every new connection with an authentication process. Basically, it looks like this: + + Server sends a ServerHello message, which describes the known authentication protocols. Client sends a ClientHello message, which includes authentication info. Server sends an AuthAccept message.
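+ + The client-side mangling step could look roughly like the following. This is a minimal sketch, not the actual &MCOP; source: it assumes OpenSSL's MD5() function is available, and it assumes that R and S are simply concatenated before hashing - the exact input format of the real protocol may differ.
+
+// Hypothetical sketch of the [CLIENT] mangling step of md5-auth.
+// Build with: g++ mangle.cc -lcrypto
+#include <openssl/md5.h>
+#include <cstdio>
+#include <string>
+
+std::string mangle(const std::string& R, const std::string& S)
+{
+    std::string input = R + S;               // assumed concatenation order
+    unsigned char digest[MD5_DIGEST_LENGTH];
+    MD5(reinterpret_cast<const unsigned char*>(input.data()), input.size(), digest);
+
+    // hex-encode the 16-byte digest so it can travel as text
+    char hex[2 * MD5_DIGEST_LENGTH + 1];
+    for(int i = 0; i < MD5_DIGEST_LENGTH; i++)
+        std::snprintf(hex + 2 * i, 3, "%02x", digest[i]);
+    return std::string(hex, 2 * MD5_DIGEST_LENGTH);
+}
+
+int main()
+{
+    // R comes from the server; S is read from /tmp/mcop-USER/secret-cookie
+    std::string M = mangle("random-cookie-from-server", "secret-cookie-from-file");
+    std::printf("mangled cookie M = %s\n", M.c_str()); // M is what gets sent
+    return 0;
+}
+
The server performs the same computation with its own copies of R and S and accepts the client only if the results match - so the secret S itself never travels over the wire. + +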
To see that the security actually works, we should look at how messages are processed on unauthenticated connections: + + Before the authentication succeeds, the server will not receive other messages from the connection. Instead, if the server for instance expects a ClientHello message and gets an mcopInvocation message, it will drop the connection. If the client doesn't send a valid &MCOP; message at all (no &MCOP; magic in the message header) in the authentication phase, but something else, the connection is dropped. If the client tries to send a very large message (> 4096 bytes) in the authentication phase, the message size is truncated to 0 bytes, which ensures that it isn't accepted for authentication. This is to prevent unauthenticated clients from sending e.g. 100 megabytes of message, which would be received and could cause the server to run out of memory. If the client sends a corrupt ClientHello message (one for which demarshalling fails), the connection is dropped. If the client sends nothing at all, then a timeout should occur (to be implemented). + + + +&MCOP; Protocol Specification + + +Introduction + + &MCOP; has conceptual similarities to CORBA, but it is intended to extend it in all the ways that are required for real-time multimedia operations. + + It provides a multimedia object model, which can be used both for communication between components in one address space (one process), and between components that are in different threads, processes or on different hosts. + + All in all, it is designed for extremely high performance (so everything shall be optimized to be blazingly fast), suitable for very communicative multimedia applications. For instance, streaming videos around is one of the applications of &MCOP;, where most CORBA implementations would be brought to their knees. + + The interface definitions can handle the following natively: + + Continuous streams of data (such as audio data). Event streams of data (such as &MIDI; events). Real reference counting. + + and the most important CORBA gimmicks, like + + Synchronous method invocations. Asynchronous method invocations. Constructing user-defined data types. Multiple inheritance. Passing object references. + + + +The &MCOP; Message Marshalling + + Design goals/ideas: + + Marshalling should be easy to implement. Demarshalling requires the receiver to know what type it wants to demarshall. The receiver is expected to use all of the information, so the protocol supports skipping only to this degree: if you know you are going to receive a block of bytes, you don't need to look at each byte for an end marker; if you know you are going to receive a string, you don't need to read it up to the zero byte to find out its length while demarshalling; however, if you know you are going to receive a sequence of strings, you need to look at the length of each of them to find the end of the sequence, as strings have variable length. But if you use the strings for something useful, you'll need to do that anyway, so this is no loss. As little overhead as possible. + + Marshalling of the different types is shown in the list below (for each type: the marshalling process and the result): + + void: void types are marshalled by omitting them, so nothing is written to the stream for them.
long: marshalled as four bytes, the most significant byte first, so the number 10001025 (which is 0x989a81) would be marshalled as: 0x00 0x98 0x9a 0x81. + + enum: enums are marshalled like longs. + + byte: marshalled as a single byte, so the byte 0x42 would be marshalled as: 0x42. + + string: marshalled as a long containing the length of the following string, and then the sequence of characters; strings must end with one zero byte, which is included in the length count. So hello would be marshalled as: 0x00 0x00 0x00 0x06 0x68 0x65 0x6c 0x6c 0x6f 0x00. + + boolean: marshalled as a byte, containing 0 if false or 1 if true, so the boolean value true is marshalled as: 0x01. + + float: marshalled according to the four-byte IEEE 754 representation - detailed docs on how IEEE 754 works are here: http://twister.ou.edu/workshop.docs/common-tools/numerical_comp_guide/ncg_math.doc.html and here: http://java.sun.com/docs/books/vmspec/2nd-edition/html/Overview.doc.html. So, the value 2.15 would be marshalled as: 0x9a 0x99 0x09 0x40. + + struct: a structure is marshalled by marshalling its contents. There are no additional prefixes or suffixes required, so the structure
+
+struct test {
+ string name; // which is "hello"
+ long value; // which is 10001025 (0x989a81)
+};
+
would be marshalled as
+
+0x00 0x00 0x00 0x06 0x68 0x65 0x6c 0x6c
+0x6f 0x00 0x00 0x98 0x9a 0x81
+
+ + sequence: a sequence is marshalled by listing the number of elements that follow, and then marshalling the elements one by one. So a sequence of 3 longs a, with a[0] = 0x12345678, a[1] = 0x01 and a[2] = 0x42, would be marshalled as:
+
+0x00 0x00 0x00 0x03 0x12 0x34 0x56 0x78
+0x00 0x00 0x00 0x01 0x00 0x00 0x00 0x42
+
If you need to refer to a type, all primitive types are referred to by the names given above. Structures and enums get their own names (like Header). Sequences are referred to as *type, so that a sequence of longs is *long and a sequence of Header structs is *Header. + + + +Messages + + The &MCOP; message header format is defined by this structure:
+
+struct Header {
+ long magic; // the value 0x4d434f50, which is marshalled as MCOP
+ long messageLength;
+ long messageType;
+};
+
The possible messageTypes are currently
+
+ mcopServerHello = 1
+ mcopClientHello = 2
+ mcopAuthAccept = 3
+ mcopInvocation = 4
+ mcopReturn = 5
+ mcopOnewayInvocation = 6
+
A few notes about the &MCOP; messaging: + + Every message starts with a Header. Some message types should be dropped by the server as long as the authentication is not complete. After receiving the header, the protocol (connection) handling can receive the message completely, without looking at the contents. + + The messageLength in the header is of course in some cases redundant, which means that this approach is not minimal regarding the number of bytes. + + However, it leads to an easy (and fast) implementation of non-blocking message processing. With the help of the header, the messages can be received by protocol handling classes in the background (non-blocking); if there are many connections to the server, all of them can be served in parallel. You don't need to look at the message content to receive the message (and to determine when you are done), just at the header, so the code for that is pretty easy. + + Once a message is there, it can be demarshalled and processed in one single pass, without caring about cases where not all data may have been received (because the messageLength guarantees that everything is there).
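+ + To make the marshalling rules and the header format above concrete, here is a small self-contained sketch. It is not the actual &MCOP; buffer implementation - just an illustration that reproduces the struct test example bytes from the list above and shows how a message header would be laid out:
+
+// Illustrative sketch of the marshalling rules (not the real MCOP classes).
+#include <cstdint>
+#include <string>
+#include <vector>
+
+// longs: four bytes, most significant byte first
+void writeLong(std::vector<uint8_t>& buf, uint32_t v)
+{
+    buf.push_back((v >> 24) & 0xff);
+    buf.push_back((v >> 16) & 0xff);
+    buf.push_back((v >> 8) & 0xff);
+    buf.push_back(v & 0xff);
+}
+
+// strings: a long length (including the trailing 0 byte), the characters,
+// then the 0 byte itself
+void writeString(std::vector<uint8_t>& buf, const std::string& s)
+{
+    writeLong(buf, s.size() + 1);
+    buf.insert(buf.end(), s.begin(), s.end());
+    buf.push_back(0);
+}
+
+// header: magic "MCOP", message length, message type - three plain longs
+void writeHeader(std::vector<uint8_t>& buf, uint32_t length, uint32_t type)
+{
+    writeLong(buf, 0x4d434f50);
+    writeLong(buf, length);
+    writeLong(buf, type);
+}
+
+// struct test { string name; long value; } with name = "hello" and
+// value = 10001025 (0x989a81) yields exactly the bytes shown above:
+// 0x00 0x00 0x00 0x06 'h' 'e' 'l' 'l' 'o' 0x00 0x00 0x98 0x9a 0x81
+std::vector<uint8_t> marshalTestStruct()
+{
+    std::vector<uint8_t> buf;
+    writeString(buf, "hello");
+    writeLong(buf, 10001025);
+    return buf;
+}
+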
+ + + +Invocations + + To call a remote method, you need to send the following structure in the body of an &MCOP; message with the messageType = 4 (mcopInvocation):
+
+struct Invocation {
+ long objectID;
+ long methodID;
+ long requestID;
+};
+
After that, you send the parameters as a structure, e.g. if you invoke the method string concat(string s1, string s2), you send a structure like
+
+struct InvocationBody {
+ string s1;
+ string s2;
+};
+
If the method was declared to be oneway - that means asynchronous, without a return code - then that was it. Otherwise, you'll receive as an answer a message with messageType = 5 (mcopReturn):
+
+struct ReturnCode {
+ long requestID;
+ <resulttype> result;
+};
+
where <resulttype> is the type of the result. As void types are omitted in marshalling, you write only the requestID if you return from a void method. + + So our string concat(string s1, string s2) would lead to a return code like
+
+struct ReturnCode {
+ long requestID;
+ string result;
+};
+
+ + + +Inspecting Interfaces + + To do invocations, you need to know the methods an object supports. To make that possible, the methodIDs 0, 1, 2 and 3 are hardwired to certain functionalities. That is:
+
+long _lookupMethod(MethodDef methodDef); // methodID always 0
+string _interfaceName(); // methodID always 1
+InterfaceDef _queryInterface(string name); // methodID always 2
+TypeDef _queryType(string name); // methodID always 3
+
To read that, you of course also need
+
+struct MethodDef {
+ string methodName;
+ string type;
+ long flags; // set to 0 for now (will be required for streaming)
+ sequence<ParamDef> signature;
+};
+
+struct ParamDef {
+ string name;
+ long typeCode;
+};
+
The signature field contains ParamDef components which specify the names and types of the parameters. The type of the return code is specified in the MethodDef's type field. + + Strictly speaking, only the methods _lookupMethod() and _interfaceName() differ from object to object, while _queryInterface() and _queryType() are always the same. + + What are those methodIDs? If you do an &MCOP; invocation, you are expected to pass a number for the method you are calling. The reason for that is that numbers can be processed much faster than strings when executing an &MCOP; request. + + So how do you get those numbers? If you know the signature of the method, that is, a MethodDef that describes the method (which contains name, type, parameter names, parameter types and such), you can pass that to _lookupMethod of the object where you wish to call a method. As _lookupMethod is hardwired to methodID 0, you should encounter no problems doing so. + + On the other hand, if you don't know the method signature, you can find out which methods are supported by using _interfaceName, _queryInterface and _queryType. + + + +Type Definitions + + User-defined datatypes are described using the TypeDef structure:
+
+struct TypeComponent {
+ string type;
+ string name;
+};
+
+struct TypeDef {
+ string name;
+
+ sequence<TypeComponent> contents;
+};
+
+ + + + +Why &arts; Doesn't Use &DCOP; + + Since &kde; dropped CORBA completely and is using &DCOP; everywhere instead, naturally the question arises why &arts; isn't doing so. After all, &DCOP; support is in KApplication, it is well-maintained, supposed to integrate nicely with libICE, and whatever else. + + Since there will be (potentially) a lot of people asking whether having &MCOP; besides &DCOP; is really necessary, here is the answer.
Please don't get me wrong, I am not trying to say &DCOP; is bad. I am just trying to say &DCOP; isn't the right solution for &arts; (while it is a nice solution for other things). + + First, you need to understand what exactly &DCOP; was written for. Created in two days during the &kde;-TWO meeting, it was intended to be as simple as possible, a really lightweight communication protocol. Especially, the implementation left out everything that could involve complexity, for instance a full-blown concept of how data types shall be marshalled. + + Even though &DCOP; doesn't care about certain things (like: how do I send a string in a network-transparent manner?) - this needs to be done. So, everything that &DCOP; doesn't do is left to &Qt; in the &kde; apps that use &DCOP; today. This is mostly type management (using the &Qt; serialization operator). + + So &DCOP; is a minimal protocol which perfectly enables &kde; applications to send simple messages like "open a window pointing to http://www.kde.org" or "your configuration data has changed". However, inside &arts; the focus lies on other things. + + The idea is that little plugins in &arts; will talk to each other involving such data structures as &MIDI; events, song position pointers and flow graphs. + + These are complex data types, which must be sent between different objects, and be passed as streams or parameters. &MCOP; supplies a type concept to define complex data types out of simpler ones (similar to structs or arrays in C++). &DCOP; doesn't care about types at all, so this problem would be left to the programmer - like: writing C++ classes for the types, and making sure they can serialize properly (for instance: support the &Qt; streaming operator). + + But that way, they would be inaccessible to everything but direct C++ coding. Specifically, you could not design a scripting language that would know all the types plugins may ever expose, as they are not self-describing. + + Much the same argument is valid for interfaces as well. &DCOP; objects don't expose their relationships, inheritance hierarchies, etc. - if you were to write an object browser which shows you "what attributes does this object have", you'd fail. + + While Matthias told me that you have a special function on each object that tells you about the methods that an object supports, this leaves out things like attributes (properties), streams and inheritance relations. + + This seriously breaks applications like &arts-builder;. But remember: &DCOP; was not so much intended to be an object model (as &Qt; already has one with moc and similar), nor to be something like CORBA, but to supply inter-application communication. + + Why &MCOP; even exists is: it should work fine with streams between objects. &arts; makes heavy use of small plugins, which interconnect themselves with streams. The CORBA version of &arts; had to introduce a very annoying split between the SynthModule objects, which were the internal work modules that did the streaming, and the CORBA interface, which was something external. + + Much code cared about making interaction between the SynthModule objects and the CORBA interface look natural, but it didn't, because CORBA knew nothing at all about streams. &MCOP; does. Look at the code (something like simplesoundserver_impl.cc). Way better! Streams can be declared in the interface of modules, and implemented in a natural-looking way. + + One can't deny it: one of the reasons why I wrote &MCOP; was speed.
Here are some arguments why &MCOP; will definitely be faster than &DCOP; (even without giving figures). + + An invocation in &MCOP; will have a header of six longs. That is: the magic MCOP, the message type (invocation), the size of the request in bytes, the request ID, the target object ID, and the target method ID. + + After that, the parameters follow. Note that the demarshalling of this is extremely fast. You can use table lookups to find the object and the method demarshalling function, which means that the complexity is O(1) [it will take the same amount of time, no matter how many objects are alive, or how many functions there are]. + + Comparing this to &DCOP;, you'll see that there are at least: a string for the target object - something like myCalculator; a string like addNumber(int,int) to specify the method; and several more pieces of protocol info added by libICE, and other &DCOP; specifics I don't know about. + + These are much more painful to demarshall, as you'll need to parse the string, search for the function, etc. + + In &DCOP;, all requests run through a server (DCOPServer). That means the process of a synchronous invocation looks like this: + + Client process sends invocation. DCOPserver (man-in-the-middle) receives the invocation, looks where it needs to go, and sends it to the real server. Server process receives invocation, performs request and sends result. DCOPserver (man-in-the-middle) receives result and ... sends it to the client. Client decodes reply. + + In &MCOP;, the same invocation looks like this: + + Client process sends invocation. Server process receives invocation, performs request and sends result. Client decodes reply. + + Assuming both were implemented correctly, &MCOP;'s peer-to-peer strategy should be faster by a factor of two than &DCOP;'s man-in-the-middle strategy. Note however that there were of course reasons to choose the &DCOP; strategy, namely: if you have 20 applications running, and each app is talking to each app, you need 20 connections in &DCOP;, and 200 with &MCOP;. However, in the multimedia case this is not supposed to be the usual setting. + + I tried to compare &MCOP; and &DCOP;, doing an invocation like adding two numbers. I modified testdcop to achieve this. However, the test may not have been precise on the &DCOP; side. I invoked the method in the same process that did the call for &DCOP;, and I didn't know how to get rid of one debugging message, so I used output redirection. + + The test only used one object and one function; expect &DCOP;'s results to decrease with more objects and functions, while &MCOP;'s results should stay the same. Also, the dcopserver process wasn't connected to other applications; it might be that if many applications are connected, the routing performance decreases. + + The result I got was that while &DCOP; got slightly more than 2000 invocations per second, &MCOP; got slightly more than 8000 invocations per second. That makes a factor of 4. I know that &MCOP; isn't tuned to the maximum possible yet. (Comparison: CORBA, as implemented with mico, does something between 1000 and 1500 invocations per second.) + + If you want harder data, consider writing some small benchmark app for &DCOP; and send it to me. + + CORBA had the nice feature that you could use objects you implemented once as a separate server process, or as a library. You could use the same code to do so, and CORBA would transparently decide what to do. With &DCOP;, that is not really intended, and as far as I know not really possible.
&MCOP; on the other hand should support that from the beginning. So you can run an effect inside &artsd;. But if you are a wave editor, you can choose to run the same effect inside your process space as well. + + While &DCOP; is mostly a way to communicate between apps, &MCOP; is also a way to communicate inside apps. Especially for multimedia streaming, this is important (as you can run multiple &MCOP; objects in parallel, to solve a multimedia task in your application). + + Although &MCOP; does not currently do so, the possibilities are open to implement quality-of-service features. Something like "that &MIDI; event is really, really important, compared to this invocation", or something like "needs to be there in time". + + On the other hand, stream transfer can be integrated into the &MCOP; protocol nicely, and combined with QoS stuff. Given that the protocol may be changed, &MCOP; stream transfer should not really get slower than conventional TCP streaming, but: it will be easier and more consistent to use. + + There is no need to base a middleware for multimedia on &Qt;. Deciding to do so, and using all that nice &Qt; streaming and stuff, will easily lead to the middleware becoming a &Qt;-only (or rather &kde;-only) thing. I mean: as soon as I see GNOME using &DCOP;, too, or something like that, I am certainly proven wrong. + + While I do know that &DCOP; basically doesn't know about the data types it sends, so that you could use &DCOP; without using &Qt;, look at how it is used in daily &kde; usage: people send types like QString, QRect, QPixmap, QCString, ..., around. These use &Qt; serialization. So if somebody chose to support &DCOP; in a GNOME program, he would either have to claim to use QString,... types (although he doesn't do so) and emulate the way &Qt; does the streaming, or he would send other string, pixmap and rect types around, and thus not be interoperable. + + Well, whatever. &arts; was always intended to work with or without &kde;, with or without &Qt;, with or without X11, and maybe even with or without &Linux; (and I even have no problems with people who port it to popular non-free operating systems). + + It is my position that non-&GUI; components should be written in a non-&GUI;-dependent way, to make sharing them among a wider range of developers (and users) possible. + + I see that using two IPC protocols may cause inconveniences. Even more so if they are both non-standard. However, for the reasons given above, switching to &DCOP; is not an option. If there is significant interest in finding a way to unite the two, okay, we can try. We could even try to make &MCOP; speak IIOP; then we'd have a CORBA ORB ;). + + I talked with Matthias Ettrich a bit about the future of the two protocols, and we found lots of ways things could go on. For instance, &MCOP; could handle the message communication in &DCOP;, thus bringing the protocols a bit closer together. + + So some possible solutions would be: + + Write an &MCOP; - &DCOP; gateway (which should be possible, and would make interoperation possible) - note: there is an experimental prototype, if you'd like to work on that. Integrate everything &DCOP; users expect into &MCOP;, and try to do only &MCOP; - one could add a man-in-the-middle option to &MCOP;, too ;). Base &DCOP; on &MCOP; instead of libICE, and slowly start integrating things closer together.
+ + However, it may not be the worst possibility to use each protocol for everything it was intended for (there are some big differences in the design goals), and not try to merge them into one. + + + + + + -- cgit v1.2.1