Identifiers in DataONE
======================

Identifiers (PIDs, Persistent IDentifiers) are handles that uniquely identify
objects within the DataONE system.

* All data, metadata, and resource map objects in DataONE have a unique
  identifier.

* PIDs will always refer to the same set of bytes accessed through the DataONE
  API methods such as :func:`MNRead.get`.

* The location of content identified by a PID is determined by calling the
  :func:`CNCore.resolve` method.

* PIDs are persistent. Once content is registered with DataONE, the identifier
  for that content will remain in the DataONE system.

* PIDs are unique, and cannot be reused once assigned.

* PIDs are generally controlled by Member Nodes; their uniqueness and
  immutability, however, are enforced primarily by the Coordinating Nodes.

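As a minimal illustration of the retrieval contract described above, the
sketch below fetches the bytes identified by a PID from a Member Node's
:func:`MNRead.get` endpoint. The base URL is a placeholder, and the
``/object/<pid>`` path follows the URL pattern used in the *Serializing*
examples later in this document; it is not a substitute for the DataONE
client libraries.

.. code-block:: python

   # Hedged sketch: the Member Node base URL is a placeholder and the
   # /object/<pid> path follows the pattern shown under "Serializing" below.
   from urllib.parse import quote
   from urllib.request import urlopen

   MN_BASE_URL = "https://mn.example.com/mn"   # placeholder Member Node

   def get_object(pid):
       """Return the exact byte sequence identified by pid."""
       # Conservatively percent escape everything outside the RFC 3986
       # unreserved set; DataONE's own libraries use the more minimal
       # escaping described under "Serializing".
       url = "%s/object/%s" % (MN_BASE_URL, quote(pid, safe=""))
       with urlopen(url) as response:
           return response.read()

   # data = get_object("10.1000/182")
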
Uniqueness
----------

Generation of identifiers in DataONE is largely under the control of the
Member Nodes (i.e. the data providers), with the requirement that an existing
identifier (i.e. one that is already registered in the DataONE system) cannot
be reused. This rule is enforced for new content by checking the uniqueness of
a proposed identifier in the :func:`MNStorage.create` method, and for existing
content by ignoring content with identifiers that are already in use. The
:func:`CNCore.reserveIdentifier` method may be used to reserve an identifier,
so that a client may, for example, compose a composite object prior to
committing the new content to storage on the Member Node. Similarly, Tier 3
and above Member Nodes may support the :func:`MNStorage.generateIdentifier`
method, which will typically delegate to a third party persistent identifier
service such as EZID [1]_ to return an identifier guaranteed to be unique
within the DataONE system.

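The sketch below shows one way a client might obtain an identifier before
creating an object. The REST mappings (``POST <CN>/reserve`` for
:func:`CNCore.reserveIdentifier`, ``POST <MN>/generate`` for
:func:`MNStorage.generateIdentifier`), the parameter names, the base URLs, and
the certificate paths are all assumptions made for illustration, not a
definitive client implementation.

.. code-block:: python

   # Hedged sketch: REST paths, parameter names, and URLs are assumptions.
   import requests

   CN_BASE_URL = "https://cn.example.org/cn/v2"  # placeholder Coordinating Node
   MN_BASE_URL = "https://mn.example.org/mn/v2"  # placeholder Tier 3 Member Node
   CLIENT_CERT = ("/path/client.crt", "/path/client.key")  # authenticated subject

   def reserve_identifier(pid):
       """Ask the CN to reserve a proposed identifier for the calling subject."""
       r = requests.post("%s/reserve" % CN_BASE_URL,
                         data={"pid": pid}, cert=CLIENT_CERT)
       r.raise_for_status()   # a non-unique identifier surfaces as an error here
       return pid

   def generate_identifier(scheme="UUID"):
       """Ask a Tier 3 Member Node to mint a new, globally unique identifier."""
       r = requests.post("%s/generate" % MN_BASE_URL,
                         data={"scheme": scheme}, cert=CLIENT_CERT)
       r.raise_for_status()
       return r.text          # response body carries the new Identifier
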
Authority
---------

DataONE treats the original identifier (i.e. the first assignment of the
identifier to an object that becomes known to DataONE) as the authoritative
identifier for that object. Although generally not encouraged, multiple
identifiers may refer to a particular object; in such cases, DataONE will
attempt to utilize the original identifier for all communications about the
object.


Opacity
-------

Identifiers utilized by Member Nodes can take many different forms, from
automatically generated sequential or random character strings to strings that
conform to schemes such as the LSID [2]_ and DOI [3]_ specifications. DataONE
does not directly utilize implied functionality and services that might be
available for some of these identifier schemes. This is not to say that
mechanisms such as metadata retrieval for LSIDs are not used by any components
of the DataONE infrastructure, but rather that the DataONE infrastructure and
services have no functional dependency on such external services.

Identifiers are treated as opaque strings in the DataONE system, with no
meaning inferred from structure or pattern that may be present in identifiers.
The rules for identifier construction in DataONE are minimal and intended to
ensure practical utility of identifiers: there is a set of characters that
cannot be used within an identifier string (non-printing and whitespace
characters), and a maximum number of characters that such a string may contain
(800 characters, #577). Leading and trailing white space is not allowed.


Immutability
------------

Once assigned and registered in the DataONE infrastructure, an identifier will
always refer to the same sequence of bytes. Generation of other
representations of objects may be supported by services (e.g. an image may be
transformed from TIFF to JPEG), but the identifier will always refer to the
original form.

Resolvability
-------------

A fundamental goal of DataONE is to ensure that any identifier utilized in the
system is resolvable, that is, DataONE provides a mechanism that will enable
the location of the object to be determined. Resolution is handled by the
Coordinating Nodes through the :func:`CNCore.resolve` method, which returns a
list of nodes from which the object may be retrieved.

A guarantee of identifier resolvability is an important, core function of the
DataONE infrastructure upon which many other services may be constructed, both
within DataONE and by third party systems.

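As an illustration of how a client might use this service, the sketch below
assumes that :func:`CNCore.resolve` is exposed as ``GET <CN>/resolve/<pid>``
and returns an ObjectLocationList XML document; the Coordinating Node URL and
the element names are assumptions, and a production client would normally use
the DataONE client libraries instead.

.. code-block:: python

   # Hedged sketch: URL layout and XML element names are assumptions.
   import xml.etree.ElementTree as ET
   from urllib.parse import quote
   import requests

   CN_BASE_URL = "https://cn.example.org/cn/v2"   # placeholder Coordinating Node

   def resolve(pid):
       """Return candidate retrieval URLs for the object identified by pid."""
       url = "%s/resolve/%s" % (CN_BASE_URL, quote(pid, safe=""))
       # Do not follow redirects; the body is assumed to carry the
       # ObjectLocationList even if the service also redirects to a
       # preferred location.
       r = requests.get(url, allow_redirects=False)
       r.raise_for_status()
       doc = ET.fromstring(r.content)
       # Collect the text of every <url> element, ignoring XML namespaces.
       return [el.text for el in doc.iter() if el.tag.rsplit("}", 1)[-1] == "url"]

   # for candidate in resolve("10.1000/182"):
   #     ... try each location in turn until the object is retrieved ...
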
Granularity
-----------

Identifiers refer to managed objects in DataONE. Initially, data objects,
science metadata documents, and resource maps have identifiers. The definition
of "data" is somewhat arbitrary, though: a single data object may be a single
record within some larger collection, or may refer to an entire set of records
contained within some package.


Structure
---------

The characters that may appear in an identifier string acceptable to the
DataONE system are constrained by the XML Schema definition
(:class:`Types.Identifier`), which is essentially a string of length greater
than zero but less than 800 characters with no whitespace (spaces, tabs,
non-printing characters, carriage returns, new lines). Identifiers may be
Unicode, provided they conform to the fairly liberal restrictions imposed by
the XML specification [4]_. Examples of valid identifiers in DataONE are shown
in the section *Serializing* below.

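As a minimal illustration of these constraints, the following sketch checks a
candidate identifier against the rules described in this section (non-empty,
at most 800 characters, no whitespace or non-printing characters, no leading
or trailing white space). It is a plain restatement of the prose above, not
the normative :class:`Types.Identifier` schema definition.

.. code-block:: python

   # Illustrative check of the identifier rules described in this section;
   # the normative definition is the Types.Identifier XML Schema type.
   import unicodedata

   MAX_LENGTH = 800

   def is_valid_identifier(pid):
       """Return True if pid satisfies the identifier rules described above."""
       if not isinstance(pid, str):
           return False
       if len(pid) == 0 or len(pid) > MAX_LENGTH:
           return False
       if pid != pid.strip():
           return False                 # no leading or trailing white space
       for ch in pid:
           if ch.isspace():
               return False             # no spaces, tabs, CR, LF, ...
           if unicodedata.category(ch).startswith("C"):
               return False             # no control / non-printing characters
       return True

   # is_valid_identifier("urn:lsid:ubio.org:namebank:11815")  -> True
   # is_valid_identifier("bad identifier")                    -> False
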
Serializing
-----------

When identifiers appear in text, the full identifier should be presented
unmodified.

Identifiers appearing in URLs or other representations that have reserved
characters should be escaped according to the rules of the targeted
serialization format. For example, the identifiers::

   10.1000/182
   urn:lsid:ubio.org:namebank:11815
   http://example.com/data/mydata?row=24
   ldap://ldap1.example.net:6666/o=University%20of%20Michigan,c=US??sub?(cn=Babs%20Jensen)
   ฉันกินกระจกได้
   Is_féidir_liom_ithe_gloine

would be serialized in DataONE :func:`MNRead.get` URLs (or any other URL path)
according to the RFC 3986 [5]_ encoding guidelines for URI path segments::

   http://mn.example.com/mn/object/10.1000%2F182
   http://mn.example.com/mn/object/urn:lsid:ubio.org:namebank:11815
   http://mn.example.com/mn/object/http:%2F%2Fexample.com%2Fdata%2Fmydata%3Frow=24
   http://mn.example.com/mn/object/ldap:%2F%2Fldap1.example.net:6666%2Fo=University%2520of%2520Michigan,c=US%3F%3Fsub%3F(cn=Babs%2520Jensen)
   http://mn.example.com/mn/object/%E0%B8%89%E0%B8%B1%E0%B8%99%E0%B8%81%E0%B8%B4%E0%B8%99%E0%B8%81%E0%B8%A3%E0%B8%B0%E0%B8%88%E0%B8%81%E0%B9%84%E0%B8%94%E0%B9%89
   http://mn.example.com/mn/object/Is_f%C3%A9idir_liom_ithe_gloine

.. note::

   The "+" (plus) character is a special case: it was once treated as a space
   character in URLs, and RFC 3986 [5]_ changed this so that "+" is no longer
   treated as a space. To minimize confusion when the plus character appears
   in an identifier, DataONE recommends that it be percent escaped (``%2B``)
   when it appears in DataONE service URLs. All DataONE libraries and services
   operate in this manner.

The necessary encoding of URLs can usually be achieved through standard
libraries available in many languages, with the caveat that the encoding must
follow the RFC 3986 encoding rules. Many packages over-escape, keeping only
the unreserved character set unescaped. For its client libraries, DataONE
takes a minimal escaping approach within the latitude RFC 3986 allows:
specifically, using [pchar] - ['+'] as the set of unescaped characters for
identifiers in path segments, and [pchar] - ['+', '&', '='] + ['/', '?'] for
identifiers in query segments ("segments" in both cases meaning the characters
between delimiters). For example, the strings::

   example-location-dependent-__/__?__&__=__
   example-common-unescaped-;:@$-_.!*()',~

will be encoded in paths to::

   example-location-dependent-__%2F__%3F__&__=__
   example-common-unescaped-;:@$-_.!*()',~

and encoded in the query section to::

   example-location-dependent-__/__?__%26__%3D__
   example-common-unescaped-;:@$-_.!*()',~

Note that RFC 3986 [5]_ treats the query section of the URI as a black box, so
'&' and '=' are left unescaped there (they are used as sub-delimiters).
Because DataONE encodes content at the segment level, those characters do need
to be escaped when they occur within an identifier placed in a query segment.
For implementations that rely on standard encoding routines, it is important
to know how the particular package treats this.

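The escaping sets described above translate directly into a call to a standard
percent encoding routine. The sketch below uses Python 3's :mod:`urllib.parse`
to reproduce the path and query segment encodings shown in the examples; the
two ``*_SAFE`` constants simply spell out the character sets from the
preceding paragraph, and the function names are illustrative.

.. code-block:: python

   # Minimal escaping per the sets described above, using the Python 3
   # standard library. Letters, digits and "-._~" (the RFC 3986 unreserved
   # set) are never escaped by quote(); the safe strings add the remaining
   # allowed characters.
   from urllib.parse import quote

   PATH_SAFE = ":@!$&'()*,;="     # pchar minus "+"
   QUERY_SAFE = ":@!$'()*,;/?"    # path set minus "&" and "=", plus "/" and "?"

   def encode_path_segment(pid):
       return quote(pid, safe=PATH_SAFE)

   def encode_query_segment(pid):
       return quote(pid, safe=QUERY_SAFE)

   # encode_path_segment("example-location-dependent-__/__?__&__=__")
   #   -> 'example-location-dependent-__%2F__%3F__&__=__'
   # encode_query_segment("example-location-dependent-__/__?__&__=__")
   #   -> 'example-location-dependent-__/__?__%26__%3D__'
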
The following examples in Python and Java illustrate percent encoding of data
such as an identifier appropriate for appending to a URL. Each processes UTF-8
encoded input from *stdin* and outputs percent encoded or decoded responses.
In Java pseudo-code, the general process is as follows.

.. code-block:: java

   // pseudo-code: this will not compile!
   CharacterSet PATH_SAFE = RFC3986_PCHAR and not ['+'];
   CharacterSet QUERY_SAFE = PATH_SAFE and not ['&','='] or ['?','/'];

   String encodeUtf8_pathSegment(identifier)
   {
     String utf8ID = identifier.translate("UTF-8");
     return encodedID = percentEscape(utf8ID, PATH_SAFE);
   }

   String encodeUtf8_querySegment(identifier)
   {
     String utf8ID = identifier.translate("UTF-8");
     return encodedID = percentEscape(utf8ID, QUERY_SAFE);
   }

   String decodeString(string)
   {
     // older clients may encode spaces with '+',
     // so if we see one in the input it is due to that
     // and we need to decode it, too.
     String correctedString = string.replace("+", "%2B");
     return decodePercentEscaped(correctedString);
   }

.. code-block:: python

   import sys
   import urllib

   def pctEncode(data):
       '''Encode the unicode string data as utf-8, then percent encode that,
       ready for appending as a path element to a URL.
       '''
       return urllib.quote(data.encode("utf-8"), safe=":")

   def pctDecode(data):
       '''Decode a percent encoded string and return the unicode object,
       but first handle any mistaken '+' in the data string.
       '''
       data = data.replace("+", "%2B")
       return urllib.unquote(data)

   if __name__ == "__main__":
       '''Read utf-8 encoded input from stdin and percent encode or decode
       (with command line argument -d). e.g. given test_ids.txt, a UTF-8
       encoded file with identifiers appearing one per line:

         cat test_ids.txt | python PctEncode.py | python PctEncode.py -d

       should output the equivalent of:

         cat test_ids.txt
       '''
       doEncode = True
       try:
           if sys.argv[1] == "-d":
               doEncode = False
       except IndexError:
           pass
       id = unicode(sys.stdin.readline(), "utf-8").strip()
       while len(id) > 0:
           if doEncode:
               print pctEncode(id)
           else:
               print pctDecode(id)
           id = unicode(sys.stdin.readline(), "utf-8").strip()

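The Python example above targets Python 2 (``urllib.quote``, ``print``
statements). A roughly equivalent filter for Python 3 is sketched below using
:func:`urllib.parse.quote` and :func:`urllib.parse.unquote`; it is an
illustration only, not part of the DataONE client libraries.

.. code-block:: python

   # Python 3 sketch of the same stdin filter; not a DataONE library module.
   import sys
   from urllib.parse import quote, unquote

   def pct_encode(data):
       """Percent encode a string for use as a URL path element."""
       return quote(data, safe=":")

   def pct_decode(data):
       """Decode a percent encoded string; a bare '+' is treated as a literal
       plus, since the encoder would have escaped it as %2B."""
       return unquote(data.replace("+", "%2B"))

   if __name__ == "__main__":
       do_encode = "-d" not in sys.argv[1:]
       for line in sys.stdin:
           line = line.strip()
           if line:
               print(pct_encode(line) if do_encode else pct_decode(line))
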
.. code-block:: java

   import java.io.*;
   import java.net.*;

   class PctEncode {
     /** Simple example of URL path encoding of UTF-8 strings for including
         as path elements in URLs as per RFC 3986. e.g. given test_ids.txt,
         a UTF-8 encoded file with identifiers appearing one per line:

           cat test_ids.txt | java PctEncode | java PctEncode -d

         should output the equivalent of:

           cat test_ids.txt
     */

     public static String pctDecode(String data) {
       /** Decode a percent encoded string, returning a Java Unicode string. */
       String response = null;
       try {
         data = data.replace("+", "%2B");
         response = URLDecoder.decode(data, "UTF-8");
       } catch (java.io.UnsupportedEncodingException e) {
         System.out.println("Error pctDecode: " + e.getMessage());
       }
       return response;
     }

     public static String pctEncodePathSegment(String data) {
       /** Encode a Java string according to the path encoding rules in
           RFC 3986. Note that this does not encode properly for data that
           is to be the root of the path; it is assumed that the data will
           be appended to the end of a URL path.
       */
       String response = null;
       try {
         response = URLEncoder.encode(data, "UTF-8");
         // fix outdated space-to-+ convention
         response = response.replace("+", "%20");
         // now un-escape for the minimally escaped result
         response = response.replace("%3A", ":").replace("%28", "(");
         response = response.replace("%3B", ";").replace("%29", ")");
         response = response.replace("%40", "@").replace("%27", "'");
         response = response.replace("%24", "$").replace("%2C", ",");
         response = response.replace("%21", "!").replace("%7E", "~");
       } catch (java.io.UnsupportedEncodingException e) {
         System.out.println("Error pctEncode: " + e.getMessage());
       }
       return response;
     }

     public static void main(String[] args) {
       try {
         boolean doEncode = true;
         try {
           if (args[0].equals("-d"))
             doEncode = false;
         } catch (ArrayIndexOutOfBoundsException e) {
           // no arguments given; default to encoding
         }
         PrintStream outs = new PrintStream(System.out, true, "UTF-8");
         InputStreamReader isr = new InputStreamReader(System.in, "UTF-8");
         BufferedReader reader = new BufferedReader(isr);
         String id = null;
         String data = null;
         while ((id = reader.readLine()) != null) {
           if (doEncode) {
             data = pctEncodePathSegment(id);
           } else {
             data = pctDecode(id);
           }
           outs.println(data);
         }
       } catch (java.io.IOException e) {
         System.out.println("Error main: " + e.getMessage());
       }
     }
   }

Given this code and a UTF-8 encoded source file *test_ids.txt* such as::

   10.1000/182
   urn:lsid:ubio.org:namebank:11815
   http://example.com/data/mydata?row=24
   ldap://ldap1.example.net:6666/o=University%20of%20Michigan,%20c=US??sub?(cn=Babs%20Jensen)
   ฉันกินกระจกได้
   Is_féidir_liom_ithe_gloine

the following commands should output the same as ``cat test_ids.txt``::

   cat test_ids.txt | java PctEncode | python PctEncode.py -d
   cat test_ids.txt | python PctEncode.py | java PctEncode -d


.. _guid: http://en.wikipedia.org/wiki/Globally_unique_identifier#Algorithm

.. _OGC WKT: http://en.wikipedia.org/wiki/Well-known_text


.. [1] http://n2t.net/ezid/

.. [2] http://lsids.sourceforge.net/

.. [3] http://www.doi.org/

.. [4] http://www.w3.org/TR/xml11/#charsets

.. [5] http://tools.ietf.org/html/rfc3986

..
   OLD Notes follow, preserved here for now but likely to be removed

   Suggested Strategy
   ------------------

   1. DataONE supports all identifier schemes where the PID can be represented
      as a Unicode string (this should be any identifier).

   2. The original identifier first assigned by a Member Node is the identifier
      promoted as the authoritative identifier for that content. Other
      identifiers that may be assigned by MNs that don't support the original
      scheme will be mapped to the original.

   3. If the original MN discontinues participation in DataONE, then the
      identifier originally used remains as the authoritative identifier.

   4. Any identifiers in use by the DataONE system can be resolved at any node
      (CN or MN). A caching system (e.g. memcached) should be used to improve
      resolution performance (can be primed with existing IDs).

   This strategy will enable the use of any identifier that is represented by a
   string, and will persist the original identifier for the object regardless
   of what happens to the originating Member Node. An obvious concern with this
   strategy is that a single object may have multiple identifiers associated
   with it. Since the original identifier is persisted, however, it will be the
   primary identifier by which that content will be referenced, regardless of
   which node the object is located on.

   .. @startuml images/resolve.png
      title Resolve PID
      actor User
      participant "CRUD API" as m_crud << Member Node >>
      participant "Cache" as m_cache << Member Node >>
      participant "CRUD API" as cn_crud << Coordinating Node >>
      participant "Directory" as cn_dir << Coordinating Node >>
      User -> m_crud: resolve(token, "A5548D")
      m_crud -> m_cache: cache_lookup("A5548D")
      m_cache --> m_crud: FAIL
      m_crud -> cn_crud: resolve(token, "A5548D")
      cn_crud -> cn_dir: lookup("A5548D")
      cn_dir --> cn_crud: metadata
      cn_crud --> m_crud: metadata
      m_crud --> m_cache: addEntry("A5548D", metadata)
      m_crud --> User: metadata
      @enduml

   .. image:: images/resolve.png

   *Figure 1.* Resolving a PID. In this scenario a user is trying to determine
   what the ID "A5548D" refers to, and uses the resolution service of a Member
   Node to that effect.

   .. @startuml images/resolve-detail.png
      title Resolve PID Detail
      actor User
      participant "CRUD API" as m_crud << Member Node >>
      participant "Cache" as m_cache << Member Node >>
      participant "CRUD API" as cn_crud << Coordinating Node >>
      participant "Directory" as cn_dir << Coordinating Node >>
      participant "CRUD API" as m_crud2 << Member Node 2 >>
      User -> m_crud: get(token, "A5548D")
      m_crud -> m_cache: lookup("A5548D")
      note right
        Local resolve failed, defer to CN
      endnote
      m_cache --> m_crud: FAIL
      m_crud -> cn_crud: resolve(token, "A5548D")
      cn_crud -> cn_dir: lookup("A5548D")
      cn_dir --> cn_crud: metadata
      cn_crud --> m_crud: metadata
      m_crud --> m_cache: addEntry(GUID, metadata)
      m_crud -> m_crud: parseMetadata(metadata)
      note right
        Found data URL = http://mn2.dataone.org/objects/A4448D
      endnote
      m_crud --> User: HTTP 302: http://mn2.dataone.org/objects/A4448D
      note right
        Return a redirect to the MN 2 get object interface for the
        specified object.
      endnote
      User -> m_crud2: GET "http://mn2.dataone.org/objects/A4448D"
      m_crud2 --> User: bytes
      @enduml

   .. image:: images/resolve-detail.png

   *Figure 2.* Detail for object retrieval of an object identified by a PID.
   In this case, the User is requesting a data object from MN 1, though the
   data is actually located on MN 2.

   .. @startuml images/resolve-conflict.png
      title Conflicting IDs
      participant "MN_A" as mn_a
      participant "MN_B" as mn_b
      participant "CN" as cn
      participant "CN OStore" as cn_os
      mn_a -> cn: registerID("435")
      cn -> cn_os: store("mn_a:435")
      cn_os <-- cn: ACK
      mn_a <-- cn: ACK
      mn_b -> cn: registerID("435")
      cn -> cn_os: store("mn_b:435")
      cn_os <-- cn: ACK
      mn_b <-- cn: ACK
      actor user
      user -> cn: resolve("435")
      user <-- cn: "mn_a:435", "mn_b:435"
      @enduml

   .. image:: images/resolve-conflict.png

   *Figure 3.* A scenario where two MNs happen to add different content to the
   system with the same identifier. Resolving the identifier without including
   the namespace results in two matches that must be interpreted by the client.
   The likelihood of such a scenario should be low, given that MNs should be
   utilizing identifier schemes that under ideal circumstances should not
   generate duplicate identifiers.

   Notes from the 20090602 Albuquerque Meeting
   -------------------------------------------

   These lightly edited notes were taken by Bruce Wilson of the group
   discussion about identifiers during the VDC-TWG 20090602 Albuquerque
   Meeting. Original notes are located in subversion at:
   /documents/Projects/VDC/docs/20090602_04_ABQ_Meeting

   Design Goals
   ~~~~~~~~~~~~

   From the DataONE perspective, an identifier is opaque. DataONE does not
   attach any meaning or resolution protocol based on the identifier.

   A call to return the object associated with a particular identifier should
   always return either identically the same object or n/a if that object is
   no longer available. This raises a number of implementation issues, noted
   below. Particular issues include how to handle data which is regularly
   updated and things like status changes.

   A Member Node may use its own internal identification scheme, but must be
   able to retrieve an object based on its DataONE globally unique identifier.

   Member Nodes may generate their own unique identifiers, such as DOIs_,
   Handles_, PURLs_, or UUIDs_. The only requirement is that the identifier is
   unique across the space of DataONE. This implies that CN's must have
   functionality to:

   .. _DOIs: http://www.doi.org/
   .. _Handles: http://www.handle.net/
   .. _PURLs: http://purl.org/docs/index.html
   .. _UUIDs: http://en.wikipedia.org/wiki/UUID

   (a) check that an identifier is unique and (b) to "reserve" or stub-out an
   identifier while the MN goes through the process of assembling the package
   to submit the object into DataONE.

   When an object is replicated from one MN to another MN, the receiving MN
   must be able to accept and resolve the supplied DataONE identifier. That
   is, an object, no matter where it is within the DataONE network, must be
   retrievable by its DataONE identifier, regardless of location. There was a
   lot of discussion on this point, and this is my interpretation of the
   conclusion. I believe we came out with the point that if a receiving Member
   Node assigns its own permanent identifier, then that creates more
   confusion, requires the MN to register that second ID with the CNs, and we
   can have confusion regarding the citation (for example) of the piece of
   data. It also complicates tracking things like metrics, since the
   originating MN must then find out all other identifiers for the data and
   search for all of those. And while it can be argued that nobody "owns" the
   data, there is (currently) a culture and need for the original archive to
   feel like it still can receive credit for that investment.

   A system doesn't need to maintain every version, but it does need to be
   able to identify every version.

   Identifiers also apply to metadata as well as data.

   Questions for Further Consideration
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

   If a MN uses a DOI for a data set identifier, is it appropriate to include
   doi: in the identifier? For example, 10.3334/ORNLDAAC/840 is the DOI for a
   particular data set at the ORNL DAAC. Both "doi:10.3334/ORNLDAAC/840" and
   "10.3334/ORNLDAAC/840" can be presumed to be unique identifiers. Which
   should be used? BEW: My personal preference is to use the one with the
   resolution protocol included. That does, however, make the identifier more
   of a "smart" identifier, which is generally problematic.

   Where an identifier has a mechanism to resolve to multiple locations (such
   as is possible with an LSID and some DOI mechanisms) and that object is
   replicated from one MN to another MN, this would suggest that the
   originating MN needs to be notified of the additional location and has the
   option of registering the new location with the handle registration
   authority. This also means that if a replication is removed, the original
   MN should have the option of being notified, so that the resolution points
   are updated. Ideally, this should happen before the replica is removed
   (where possible), so that we eliminate (or at least minimize) the amount of
   time that an invalid resolution point is in someone else's system.

   Where an identifier (such as a Handle) has a URL resolution, what should
   that resolution be? ORNL DAAC DOI's resolve to a web page where a user
   (after logging in) can see and download the components of the data set. Our
   opinion is that the DOI resolving to a human interpretable description of
   the object is more important than a machine interpretable resolution point.
   Some thought and guidance on this point for the overall DataONE community
   of practice is desirable.

   Do we want/need a registry of name spaces? Where a MN uses a UUID (for
   example), there may not be a way to describe the name space for
   identifiers, unless the MN prefixes the UUID with some descriptor, which
   generally violates the general admonition about smart identifiers. It
   might, however, be helpful to have something like a set of regexps that
   describe the name space for a MN's identifiers, particularly if an
   automated way could be developed to look for potential collisions (non-null
   overlaps) between name spaces. BEW: My thought is that this is far from an
   initial feature, but the desirability of this as a possible future feature
   could have implications on the way we do things from the start.

   Can the metadata standards support multiple globally unique identifiers?
   For example, what happens in the case that a MN starts down the DOI path
   and then switches to LSID's because of economic costs, for example, and
   goes back and assigns an LSID to historical data sets? Those data sets now
   have both an LSID and a DOI. Where is this in the metadata? Is there a
   mechanism for indicating the preferred ID and the alternate ID's? Likewise,
   how should things be handled when a MN decides to register an object with
   e.g. GCMD and the namespace that GCMD allows for identifiers does not allow
   for the MN's preferred identifier? Can a MN update the metadata to show an
   alternate key with the GCMD identifier (data set is also known as)? What is
   the implication for the metadata identifier in such a case? This is an
   update operation to the metadata, which implies that the metadata
   identifier is changed. How would one update the old metadata record to
   indicate that it is: (a) deprecated and (b) the id of the new metadata
   record?

   The above also relates to the issue of establishing predecessor-successor
   relationships between identifiers. How should this be done across the
   system?

   How do versions enter into the identifier scheme? The general concept is
   that different versions of an object have different identifiers. What about
   having some type of an identifier that aggregates all versions of an object
   and which always points to the latest version of that object? How does D1
   know that an object is a new version of an existing object? The update
   operation should take the old identifier and the new identifier. That would
   allow for the tracking of updates. A Member Node may track versions. Could
   create an interface specification for "latest version" where the CN calls
   the authoritative MN for the DS and asks for the identifier of the latest
   version of a particular identifier. Points back to the need for what
   amounts to meta-metadata, where the metadata object can be updated to
   indicate the status level of the data set (e.g. deprecated).

   Where is the identifier for something like the World Ocean Data Base, which
   gets updated quarterly? They think of the fundamental unit as an
   observation point, which is either a location (e.g. buoy, possibly with
   different identifiers for different depths) or a leg of a trip, with
   multiple observations along a path.

   For identifiers, we may need to specify the character space. What happens
   when a MN stores unique identifiers in a database field that supports just
   ASCII, but a different MN does its unique identifiers in some other
   character set? PURL is a possible unique identifier, but we can get into
   cases now where URLs have characters from other language character sets
   (such as Arabic, Kanji, etc.).

   What happens when a request for a replicated version of a data set comes to
   the replicate MN and the data set has been updated and the originating MN
   has not supplied the information about the update (e.g. they did an insert
   for the new version)?

   How do we assign ID's for a continuous data stream or for a subset
   calculated on the fly? Does this mean that every request for a continuous
   data stream gets its own data set identifier, which then gets stored in the
   D1 system someplace? What is the value to the overall enterprise for
   storing the data set identifiers for each request, particularly in the
   context of something like a stream, where the on-the-fly processing is used
   to get a dynamic subset or dynamic reprojection? Examples of this sort of
   situation include the stream gauge data or the Atmospheric Radiation
   Measurement (ARM) archive. Ameriflux flux tower data is a simpler case, in
   that they work on the basis of a site-year as a unit of data. The World
   Oceanic DataBase (WODB), however, operates on a location (and possibly
   depth) as a unit of data. Many of these are updated quarterly. Each unit of
   data has an identifier, unique within WODB, and WODB publishes a data
   stream that indicates what data packages were updated at what point in
   time. It is possible to determine whether a particular data package changed
   between two points in time. The differences are human interpretable, but it
   is not possible (in any generally automated fashion) to recreate the data
   stream for a particular data package at an arbitrary point in prior time.

   Do the CN's need a method to determine the object type for an identifier?

   Do identifiers need to be unique across all types of identified objects?

This strategy will enable the use of any identifier that is represented by a
string, and will persist the original identifier for the object regardless of
what happens to the originating Member Node. An obvious concern with this
strategy is that a single object may have multiple identifiers associated with
it. Since the original identifier is persisted, however, it will be the primary
identifier by which that content will be referenced, regardless of which node
the object is located on.

.. @startuml images/resolve.png
   title Resolve PID
   actor User
   participant "CRUD API" as m_crud << Member Node >>
   participant "Cache" as m_cache << Member Node >>
   participant "CRUD API" as cn_crud << Coordinating Node >>
   participant "Directory" as cn_dir << Coordinating Node >>
   User -> m_crud: resolve(token, "A5548D")
   m_crud -> m_cache: cache_lookup("A5548D")
   m_cache --> m_crud: FAIL
   m_crud -> cn_crud: resolve(token, "A5548D")
   cn_crud -> cn_dir: lookup("A5548D")
   cn_dir --> cn_crud: metadata
   cn_crud --> m_crud: metadata
   m_crud --> m_cache: addEntry("A5548D", metadata)
   m_crud --> User: metadata
   @enduml

.. image:: images/resolve.png

*Figure 1.* Resolving a PID. In this scenario a user is trying to determine
what the ID "A5548D" refers to, and uses the resolution service of a Member
Node to that effect.

.. @startuml images/resolve-detail.png
   title Resolve PID Detail
   actor User
   participant "CRUD API" as m_crud << Member Node >>
   participant "Cache" as m_cache << Member Node >>
   participant "CRUD API" as cn_crud << Coordinating Node >>
   participant "Directory" as cn_dir << Coordinating Node >>
   participant "CRUD API" as m_crud2 << Member Node 2 >>
   User -> m_crud: get(token, "A5548D")
   m_crud -> m_cache: lookup("A5548D")
   note right
     Local resolve failed, defer to CN
   endnote
   m_cache --> m_crud: FAIL
   m_crud -> cn_crud: resolve(token, "A5548D")
   cn_crud -> cn_dir: lookup("A5548D")
   cn_dir --> cn_crud: metadata
   cn_crud --> m_crud: metadata
   m_crud --> m_cache: addEntry(GUID, metadata)
   m_crud -> m_crud: parseMetadata(metadata)
   note right
     Found data URL = http://mn2.dataone.org/objects/A4448D
   endnote
   m_crud --> User: HTTP 302: http://mn2.dataone.org/objects/A4448D
   note right
     Return a redirect to the MN 2 get object interface for the
     specified object.
   endnote
   User -> m_crud2: GET "http://mn2.dataone.org/objects/A4448D"
   m_crud2 --> User: bytes
   @enduml

.. image:: images/resolve-detail.png

*Figure 2.* Detail for object retrieval of an object identified by a PID. In
this case, the User is requesting a data object from MN 1, though the data is
actually located on MN 2.

.. @startuml images/resolve-conflict.png
   title Conflicting IDs
   participant "MN_A" as mn_a
   participant "MN_B" as mn_b
   participant "CN" as cn
   participant "CN OStore" as cn_os
   mn_a -> cn: registerID("435")
   cn -> cn_os: store("mn_a:435")
   cn_os <-- cn: ACK
   mn_a <-- cn: ACK
   mn_b -> cn: registerID("435")
   cn -> cn_os: store("mn_b:435")
   cn_os <-- cn: ACK
   mn_b <-- cn: ACK
   actor user
   user -> cn: resolve("435")
   user <-- cn: "mn_a:435", "mn_b:435"
   @enduml

.. image:: images/resolve-conflict.png

*Figure 3.* A scenario where two MNs happen to add different content to the
system with the same identifier. Resolving the identifier without including the
namespace results in two matches that must be interpreted by the client. The
likelihood of such a scenario should be low, given that MNs should be utilizing
identifier schemes that under ideal circumstances should not generate duplicate
identifiers.
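Figure 3 implies that a client can occasionally get more than one
namespace-qualified match back from a resolve call. A minimal sketch of how a
client might cope, assuming the matches arrive as plain "namespace:identifier"
strings (the function and the format are illustrative, not a defined DataONE
interface)::

    def disambiguate(matches, preferred_namespace=None):
        """Pick one match from a conflicting resolve() result.

        `matches` is assumed to be a list of strings such as
        ["mn_a:435", "mn_b:435"].
        """
        if len(matches) == 1:
            return matches[0]
        if preferred_namespace is not None:
            for match in matches:
                namespace, _, _ = match.partition(":")
                if namespace == preferred_namespace:
                    return match
        # No way to decide automatically; surface the conflict to the caller.
        raise ValueError("Ambiguous identifier, candidates: %r" % (matches,))

For the Figure 3 example, ``disambiguate(["mn_a:435", "mn_b:435"],
preferred_namespace="mn_a")`` returns ``"mn_a:435"``; without a preferred
namespace the conflict is reported rather than silently resolved.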
Notes from the 20090602 Albuquerque Meeting
-------------------------------------------

These lightly edited notes were taken by Bruce Wilson of the group discussion
about identifiers during the VDC-TWG 20090602 Albuquerque Meeting. Original
notes are located in subversion at:
/documents/Projects/VDC/docs/20090602_04_ABQ_Meeting

Design Goals
~~~~~~~~~~~~

From the DataONE perspective, an identifier is opaque. DataONE does not attach
any meaning or resolution protocol based on the identifier.

A call to return the object associated with a particular identifier should
always return either identically the same object, or n/a if that object is no
longer available. This raises a number of implementation issues, noted below.
Particular issues include how to handle data which is regularly updated, and
things like status changes.

A Member Node may use its own internal identification scheme, but must be able
to retrieve an object based on its DataONE globally unique identifier.

Member Nodes may generate their own unique identifiers, such as DOIs_,
Handles_, PURLs_, or UUIDs_. The only requirement is that the identifier is
unique across the space of DataONE. This implies that CNs must have
functionality to:

(a) check that an identifier is unique, and

(b) "reserve" or stub out an identifier while the MN goes through the process
    of assembling the package to submit the object into DataONE.

.. _DOIs: http://www.doi.org/
.. _Handles: http://www.handle.net/
.. _PURLs: http://purl.org/docs/index.html
.. _UUIDs: http://en.wikipedia.org/wiki/UUID

When an object is replicated from one MN to another MN, the receiving MN must
be able to accept and resolve the supplied DataONE identifier. That is, an
object, no matter where it is within the DataONE network, must be retrievable
by its DataONE identifier, regardless of location. There was a lot of
discussion on this point, and this is my interpretation of the conclusion. I
believe we came out with the point that if a receiving Member Node assigns its
own permanent identifier, then that creates more confusion, requires the MN to
register that second ID with the CNs, and we can have confusion regarding the
citation (for example) of the piece of data. It also makes tracking things like
metrics harder, since the originating MN must then find out all other
identifiers for the data and search for all of those. And while it can be
argued that nobody "owns" the data, there is (currently) a culture and need for
the original archive to feel like it still can receive credit for that
investment.

A system doesn't need to maintain every version, but it does need to be able to
identify every version. Identifiers also apply to metadata as well as data.

Questions for Further Consideration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If a MN uses a DOI for a data set identifier, is it appropriate to include
"doi:" in the identifier? For example, 10.3334/ORNLDAAC/840 is the DOI for a
particular data set at the ORNL DAAC. Both "doi:10.3334/ORNLDAAC/840" and
"10.3334/ORNLDAAC/840" can be presumed to be unique identifiers. Which should
be used? BEW: My personal preference is to use the one with the resolution
protocol included. That does, however, make the identifier more of a "smart"
identifier, which is generally problematic.
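One practical way to keep the prefixed and bare forms from drifting apart is to
normalize identifiers before comparing them. The sketch below is illustrative
only; the prefix list and the normalization policy are assumptions, not DataONE
policy::

    # Hypothetical scheme prefixes a node might choose to recognize.
    KNOWN_SCHEMES = ("doi:", "hdl:", "ark:", "urn:")


    def split_scheme(pid):
        """Split a PID into (scheme, bare identifier).

        "doi:10.3334/ORNLDAAC/840" -> ("doi", "10.3334/ORNLDAAC/840")
        "10.3334/ORNLDAAC/840"     -> (None,  "10.3334/ORNLDAAC/840")
        """
        lowered = pid.lower()
        for scheme in KNOWN_SCHEMES:
            if lowered.startswith(scheme):
                return scheme.rstrip(":"), pid[len(scheme):]
        return None, pid


    def same_identifier(pid_a, pid_b):
        """True when two PIDs differ only by an explicit scheme prefix."""
        return split_scheme(pid_a)[1] == split_scheme(pid_b)[1]

With this, ``same_identifier("doi:10.3334/ORNLDAAC/840",
"10.3334/ORNLDAAC/840")`` is true. Recognizing prefixes at all is itself a step
toward "smart" identifiers, which these notes caution against, so logic of this
kind belongs in client tooling rather than in the identifier contract.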
Where an identifier has a mechanism to resolve to multiple locations (such as
is possible with an LSID and some DOI mechanisms) and that object is replicated
from one MN to another MN, this would suggest that the originating MN needs to
be notified of the additional location and has the option of registering the
new location with the handle registration authority. This also means that if a
replica is removed, the original MN should have the option of being notified,
so that the resolution points are updated. Ideally, this should happen before
the replica is removed (where possible), so that we eliminate (or at least
minimize) the amount of time that an invalid resolution point is in someone
else's system.

Where an identifier (such as a Handle) has a URL resolution, what should that
resolution be? ORNL DAAC DOIs resolve to a web page where a user (after logging
in) can see and download the components of the data set. Our opinion is that
the DOI resolving to a human-interpretable description of the object is more
important than a machine-interpretable resolution point. Some thought and
guidance on this point for the overall DataONE community of practice is
desirable.

Do we want or need a registry of name spaces? Where a MN uses a UUID (for
example), there may not be a way to describe the name space for its
identifiers, unless the MN prefixes the UUID with some descriptor, which
violates the general admonition about smart identifiers. It might, however, be
helpful to have something like a set of regexps that describe the name space
for a MN's identifiers, particularly if an automated way could be developed to
look for potential collisions (non-null overlaps) between name spaces. BEW: My
thought is that this is far from an initial feature, but the desirability of
this as a possible future feature could have implications on the way we do
things from the start.

Can the metadata standards support multiple globally unique identifiers? For
example, what happens in the case that a MN starts down the DOI path, then
switches to LSIDs (because of economic costs, for example), and goes back and
assigns an LSID to historical data sets? Those data sets now have both an LSID
and a DOI. Where is this in the metadata? Is there a mechanism for indicating
the preferred ID and the alternate IDs? Likewise, how should things be handled
when a MN decides to register an object with e.g. GCMD and the namespace that
GCMD allows for identifiers does not allow the MN's preferred identifier? Can a
MN update the metadata to show an alternate key with the GCMD identifier (data
set is also known as)? What is the implication for the metadata identifier in
such a case? This is an update operation to the metadata, which implies that
the metadata identifier changes. How would one update the old metadata record
to indicate (a) that it is deprecated and (b) the identifier of the new
metadata record?

The above also relates to the issue of establishing predecessor-successor
relationships between identifiers. How should this be done across the system?

How do versions enter into the identifier scheme? The general concept is that
different versions of an object have different identifiers. What about having
some type of identifier that aggregates all versions of an object and which
always points to the latest version of that object? How does D1 know that an
object is a new version of an existing object? The update operation should take
the old identifier and the new identifier; that would allow for the tracking of
updates.
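The update-takes-both-identifiers idea can be made concrete: if every update
records an (old PID, new PID) pair, the latest version of any object is found
by walking the resulting chain. The mapping and function below are purely
illustrative assumptions, not an existing DataONE interface::

    def latest_version(pid, obsoleted_by):
        """Follow predecessor-successor links to the newest identifier.

        `obsoleted_by` is assumed to be a mapping recorded at update time,
        e.g. {"A5548D": "A5548E", "A5548E": "A5548F"}: each key is an old
        PID and each value is the PID that replaced it.
        """
        seen = set()
        current = pid
        while current in obsoleted_by:
            if current in seen:
                raise ValueError("Cycle in version chain at %r" % current)
            seen.add(current)
            current = obsoleted_by[current]
        return current

With the example mapping above, ``latest_version("A5548D", chain)`` returns
``"A5548F"``, which is one way a "points to the latest version" lookup could be
serviced without minting an extra aggregate identifier.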
A Member Node may track versions. One could create an interface specification
for "latest version", where the CN calls the authoritative MN for the data set
and asks for the identifier of the latest version of a particular identifier.
This points back to the need for what amounts to meta-metadata, where the
metadata object can be updated to indicate the status level of the data set
(e.g. deprecated).

Where is the identifier for something like the World Ocean Database, which gets
updated quarterly? They think of the fundamental unit as an observation point,
which is either a location (e.g. a buoy, possibly with different identifiers
for different depths) or a leg of a trip, with multiple observations along a
path.

For identifiers, we may need to specify the character space. What happens when
a MN stores unique identifiers in a database field that supports just ASCII,
but a different MN uses some other character set for its unique identifiers? A
PURL is a possible unique identifier, but URLs can now contain characters from
other language character sets (such as Arabic or Kanji).

What happens when a request for a replicated version of a data set comes to the
replica MN, the data set has been updated, and the originating MN has not
supplied the information about the update (e.g. they did an insert for the new
version)?

How do we assign IDs for a continuous data stream, or for a subset calculated
on the fly? Does this mean that every request for a continuous data stream gets
its own data set identifier, which then gets stored in the D1 system someplace?
What is the value to the overall enterprise of storing the data set identifiers
for each request, particularly in the context of something like a stream, where
on-the-fly processing is used to get a dynamic subset or dynamic reprojection?
Examples of this sort of situation include stream gauge data or the Atmospheric
Radiation Measurement (ARM) archive. AmeriFlux flux tower data is a simpler
case, in that they work on the basis of a site-year as a unit of data. The
World Ocean Database (WODB), however, operates on a location (and possibly
depth) as a unit of data. Many of these are updated quarterly. Each unit of
data has an identifier, unique within WODB, and WODB publishes a data stream
that indicates what data packages were updated at what point in time. It is
possible to determine whether a particular data package changed between two
points in time. The differences are human-interpretable, but it is not possible
(in any generally automated fashion) to recreate the data stream for a
particular data package at an arbitrary point in prior time.

Do the CNs need a method to determine the object type for an identifier?
Do identifiers need to be unique across all types of identified objects?
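The character-space question raised above has a practical edge: whatever
characters a PID contains, it must survive being embedded in a REST URL when
objects are fetched or resolved. A minimal sketch of that round trip using
percent-encoding (standard library only; the helper names are illustrative and
nothing here is DataONE-specific)::

    from urllib.parse import quote, unquote


    def pid_to_url_segment(pid):
        """Percent-encode a PID for use as a single REST URL path segment.

        Everything except unreserved characters is escaped, including "/",
        so DOI-style identifiers and non-ASCII identifiers survive intact.
        """
        return quote(pid, safe="")


    def url_segment_to_pid(segment):
        """Inverse of pid_to_url_segment."""
        return unquote(segment)


    # Round trip a DOI-style PID and a PID containing non-ASCII characters.
    for pid in ("doi:10.3334/ORNLDAAC/840", "データ/observation-42"):
        assert url_segment_to_pid(pid_to_url_segment(pid)) == pid

If identifiers are treated this way at every interface, the character set a
Member Node chooses internally matters less, although a MN whose storage is
limited to ASCII still cannot hold arbitrary Unicode identifiers natively.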