OpenXLIFF's Issues

SAXException when converting docx file

For a certain file with many tags, an exception occurs during conversion:

Steps:
./convert.sh -file notice.docx -srcLang en -tgtLang sv -2.0
(file is attached)

Output:

Oct 24, 2019 11:45:17 AM com.maxprograms.xml.CustomErrorHandler fatalError
SEVERE: 1:250 Element type "p" must be followed by either attribute specifications, ">" or "/>".
Oct 24, 2019 11:45:17 AM com.maxprograms.converters.msoffice.MSOffice2Xliff run
SEVERE: Error converting MS Office file
org.xml.sax.SAXException: [Fatal Error] 1:250 Element type "p" must be followed by either attribute specifications, ">" or "/>".
	at openxliff/com.maxprograms.xml.CustomErrorHandler.fatalError(CustomErrorHandler.java:43)
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:181)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:400)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:327)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1471)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.seekCloseOfStartTag(XMLDocumentFragmentScannerImpl.java:1433)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:242)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2710)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:605)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:534)
	at java.xml/com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:888)
	at java.xml/com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:824)
	at java.xml/com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
	at java.xml/com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1216)
	at java.xml/com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:635)
	at openxliff/com.maxprograms.xml.SAXBuilder.build(SAXBuilder.java:89)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.writeSegment(MSOffice2Xliff.java:141)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePara(MSOffice2Xliff.java:386)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePhrase(MSOffice2Xliff.java:587)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePhrase(MSOffice2Xliff.java:589)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePhrase(MSOffice2Xliff.java:589)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePhrase(MSOffice2Xliff.java:589)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePhrase(MSOffice2Xliff.java:589)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePhrase(MSOffice2Xliff.java:589)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePhrase(MSOffice2Xliff.java:589)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePhrase(MSOffice2Xliff.java:589)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePhrase(MSOffice2Xliff.java:589)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePhrase(MSOffice2Xliff.java:589)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePhrase(MSOffice2Xliff.java:589)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recursePara(MSOffice2Xliff.java:419)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recurse(MSOffice2Xliff.java:283)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.recurse(MSOffice2Xliff.java:285)
	at openxliff/com.maxprograms.converters.msoffice.MSOffice2Xliff.run(MSOffice2Xliff.java:97)
	at openxliff/com.maxprograms.converters.office.Office2Xliff.run(Office2Xliff.java:131)
	at openxliff/com.maxprograms.converters.Convert.run(Convert.java:366)
	at openxliff/com.maxprograms.converters.Convert.main(Convert.java:238)

notice.docx

Possible performance improvement for Segmenter.segment

Hi. I noticed that for a particular docx file, the conversion takes a long time (about 20 minutes on my computer). The file is not huge (258 kB), but it probably contains some unusually large section. I have only seen the problem on this file, but I still thought it would be good to report. The file is attached: notice.docx
Example: ./convert.sh -file notice.docx -srcLang en -tgtLang sv -2.0

I have pinned the bottleneck down to Segmenter.segment, where the time is consumed by
the calls to hideTags(pureText.substring(...));. The length of pureText was 16681 in this case.

So there are three levels of nested loops:
the for-loop over pureText, the while-loop in hideTags(), and string.substring() inside the while-loop.
Each loop level iterates over the 16681 characters in this case, which explains the high time consumption.

Perhaps it can be solved with a StringBuilder or some other change to the string handling.
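The suspected blow-up can be illustrated with a minimal sketch (illustrative code, not the actual Segmenter internals): repeated String concatenation copies the whole accumulated string on every step, while a StringBuilder appends in amortized constant time.

```java
public class ConcatSketch {
    // Quadratic: each concatenation copies the entire accumulated string.
    static String concatNaive(String[] parts) {
        String result = "";
        for (String p : parts) {
            result = result + p; // O(result.length()) copy on every iteration
        }
        return result;
    }

    // Linear: StringBuilder appends in amortized O(1) per character.
    static String concatBuilder(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] parts = { "a", "b", "c" };
        // Both produce the same string; only the cost differs.
        System.out.println(concatNaive(parts).equals(concatBuilder(parts)));
    }
}
```

With 16681-character inputs the difference between the two shapes is the difference between milliseconds and minutes.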

Feature request: Hide leading whitespace

Currently, any whitespace before a sentence and between sentences is included in the translatable segment, but trailing whitespace is not (see example below). The suggestion is that all leading and trailing whitespace be hidden from translation.
(It is of course easy to hide it in the CAT tool, but it could be an improvement for the OpenXLIFF library anyway.)

With an example document like this, with spaces before, between, and after the sentences:
" Sentence one. Sentence two. "

you get xliff like this:

<source xml:space="preserve">   Sentence one.</source>
...
<source xml:space="preserve"> Sentence two.</source>

(I.e., the spaces are there, except after sentence two. I suppose the last whitespace is hidden in the skeleton?)
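As an illustration of the request, here is a minimal sketch (not OpenXLIFF's actual API) that splits a segment into leading whitespace, translatable core, and trailing whitespace, so that both whitespace runs could be moved into the skeleton:

```java
public class WhitespaceSketch {
    // Returns { leading whitespace, translatable core, trailing whitespace }.
    // Illustrative only; the real filter works on tagged segment content.
    static String[] splitWhitespace(String segment) {
        int start = 0;
        while (start < segment.length() && Character.isWhitespace(segment.charAt(start))) {
            start++;
        }
        int end = segment.length();
        while (end > start && Character.isWhitespace(segment.charAt(end - 1))) {
            end--;
        }
        return new String[] {
            segment.substring(0, start),   // hide in skeleton
            segment.substring(start, end), // expose for translation
            segment.substring(end)         // hide in skeleton
        };
    }
}
```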

XLIFF 2.0 validator invalidate user defined subState/subType values

According to the XLIFF 2.0 specification, XLIFF users can define subState/subType values with custom namespaces.

For example, the XLIFF below defines a custom namespace abc, and abc:mt is used for the subState attribute of a <segment> element.

<?xml version="1.0" encoding="UTF-8"?>
<xliff xmlns="urn:oasis:names:tc:xliff:document:2.0" version="2.0"
 srcLang="en" trgLang="ja" xmlns:abc="http://example.com/xliff/abc">
<file id="f1">
<unit id="u1">
  <segment id="s1" state="translated" subState="abc:mt">
    <source>Hello</source>
    <target>こんにちは</target>
  </segment>
</unit>
</file>
</xliff>

My understanding is that this is allowed by the XLIFF 2.0 specification. However, the validator returns an error: Invalid prefix 'abc' in "subState" attribute

com.maxprograms.validation.Xliff20 has a list of known prefixes (namespaces) as below.

	private List<String> knownPrefixes = Arrays.asList("xlf", "mtc", "gls", "fs", "mda", "res", "ctr", "slr", "val",
			"its", "my");

The list is fixed, so any other prefix not included in it will be rejected. BTW, "my" in this list is not defined by the XLIFF specification, but it is used in some examples in the specification.
The validator should probably append the namespaces declared on the <xliff> element to knownPrefixes when validating subState/subType values.
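A possible shape for that fix, as a hedged sketch (the attribute map here is an assumption, not the real com.maxprograms.xml interface): scan the root element's attributes for xmlns: declarations and append those prefixes to the known list.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PrefixSketch {
    // Builds the prefix list from the fixed spec prefixes plus any
    // namespace prefixes declared on the root <xliff> element.
    // rootAttributes is a stand-in for however the validator exposes
    // the root element's attributes.
    static List<String> knownPrefixes(Map<String, String> rootAttributes) {
        List<String> prefixes = new ArrayList<>(List.of("xlf", "mtc", "gls",
                "fs", "mda", "res", "ctr", "slr", "val", "its", "my"));
        for (String name : rootAttributes.keySet()) {
            if (name.startsWith("xmlns:")) {
                prefixes.add(name.substring("xmlns:".length()));
            }
        }
        return prefixes;
    }
}
```

With the example file above, the declared prefix abc would then pass validation.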

Make ILogger deterministic for some stages of processing

Hello,

We are currently building an Oxygen XML Editor plugin that integrates Fluenta into the DITA Maps Manager. We are using com.maxprograms.fluenta.API for calling Fluenta operations. I've noticed that there is a com.maxprograms.tmengine.ILogger interface for notifying progress. The progress is indeterminate, but for some stages it could become deterministic. For example, in DitaMap2Xliff, after setting the stage to "Processing Files" you already know how many files will be processed.

Hello, IDML files to Xliff issue

When an IDML file is converted to an XLIFF file, the order of the opening and closing tags of the "Content" and "CharacterStyleRange" elements is reversed.
Can you fix it?

Performance improvement for convert with -embed and -2.0

I noticed that the -2.0 option in combination with -embed (which is my use case) makes the convert step take a very long time. Maybe you are already aware of it, but here are some measurements and a small investigation.
Some examples (all done with the same test.docx, 4.3 MB, 4400 words):

without -embed flag:
./convert.sh -file test.docx -srcLang da -tgtLang sv -2.0
13 seconds

without -2.0 flag:
./convert.sh -file test.docx -srcLang da -tgtLang sv -embed
13 seconds

with both -2.0 and -embed flag:
./convert.sh -file test.docx -srcLang da -tgtLang sv -2.0 -embed
77 seconds

I did some debugging, and one particular call to com.maxprograms.xml.Element.mergeText() takes about 1 minute to complete. The bottleneck seems to be this line:
https://github.com/rmraya/OpenXLIFF/blob/master/src/com/maxprograms/xml/Element.java#L167

When mergeText() runs on the <internal-file> element (i.e., all the base64 skeleton data), the content member is a big vector (37000 lines in my case), which is then concatenated line by line into a new string:
t.setText(t.getText() + ((TextNode) n).getText());

This can probably be improved fairly easily so that it runs almost instantly, for example by using a StringBuilder for the concatenation.
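A sketch of the suggested StringBuilder approach (text nodes are simplified to plain Strings here; this is not the actual Element.mergeText code): consecutive text items are accumulated in a builder and emitted once, instead of calling setText(getText() + next) per item.

```java
import java.util.ArrayList;
import java.util.List;

public class MergeTextSketch {
    // Merges consecutive text items (Strings) into one item, keeping
    // non-text nodes in place. Stand-in for merging adjacent TextNodes
    // in an Element's content vector.
    static List<Object> mergeText(List<Object> content) {
        List<Object> out = new ArrayList<>();
        StringBuilder run = new StringBuilder();
        for (Object node : content) {
            if (node instanceof String s) {
                run.append(s); // amortized O(1), instead of rebuilding the string
            } else {
                if (run.length() > 0) {
                    out.add(run.toString());
                    run.setLength(0);
                }
                out.add(node);
            }
        }
        if (run.length() > 0) {
            out.add(run.toString());
        }
        return out;
    }
}
```

For a 37000-entry vector this turns a quadratic pass into a single linear one.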

Segmentation rules are being ignored.

I don't know exactly when it started, but since at least version 3.17 the segmentation rules are being ignored. Whether I use default.srx (by not specifying any) or pass -srx my.srx, they are not applied. I updated to 3.20: same result. I tried with v3.6, which I had on my disk, and that works fine.

-srx option doesn't work

I used LanguageTool's SRX file, but the extracted file is not segmented according to it. I've compared the unsegmented output with the segmented one and still can't see any difference.

command I ran: ./convert.sh -file ~/Documents/sample.xliff -srcLang en -srx ~/Documents/languagetoolorg-srx.srx

I've attached sample file

Archive.zip

IDML to XLIFF identify page

First of all, thanks for this great tool suite 👍
I've converted an IDML file to XLIFF, and I want to know: is there any way to identify the pages to which the translation units belong?

Invalid result after merge operation (related to Element#mergeText)

Hello, thank you for sharing this tool.

I found a problem related to the merge operation; I'll try to describe it in this issue.

How to reproduce

test.skl

<?xml version="1.0" encoding="UTF-8"?>
<ROOT>
  <NIV1>%%%1%%%
</NIV1>
</ROOT>

test.xlf

<?xml version="1.0" encoding="UTF-8"?>
<xliff xmlns="urn:oasis:names:tc:xliff:document:2.0" xmlns:mtc="urn:oasis:names:tc:xliff:matches:2.0" xmlns:mda="urn:oasis:names:tc:xliff:metadata:2.0" srcLang="fr" version="2.1" trgLang="en-US">
  <file original="test.xml" id="1">
    <skeleton href="test.skl"/>
    <mda:metadata>
      <mda:metaGroup category="format">
        <mda:meta type="datatype">xml</mda:meta>
      </mda:metaGroup>
      <mda:metaGroup category="tool">
        <mda:meta type="tool-id">OpenXLIFF</mda:meta>
        <mda:meta type="tool-name">OpenXLIFF Filters</mda:meta>
        <mda:meta type="tool-version">3.15.0 20230913_0710</mda:meta>
      </mda:metaGroup>
      <mda:metaGroup category="PI">
        <mda:meta type="encoding">UTF-8</mda:meta>
      </mda:metaGroup>
    </mda:metadata>
    <unit id="1">
      <mda:metadata id="1">
        <mda:metaGroup category="attributes" id="ph0">
          <mda:meta type="ctype">x-bold</mda:meta>
        </mda:metaGroup>
      </mda:metadata>
      <originalData>
        <data id="ph0">&lt;G&gt;</data>
        <data id="ph1">&lt;/G&gt;</data>
      </originalData>
      <ignorable>
        <source xml:space="preserve">
          <ph id="ph0"/>
        </source>
      </ignorable>
      <segment state="final" id="1-0">
        <source xml:space="preserve"> Bonjour. </source>
        <target> Hello. </target>
      </segment>
      <ignorable>
        <source xml:space="preserve">
          <ph id="ph1"/>
        </source>
      </ignorable>
      <segment state="final" id="1-1">
        <source xml:space="preserve"> Ce text devrait être traduit </source>
        <target> This text should be translated </target>
      </segment>
    </unit>
  </file>
</xliff>

command

./merge.sh -xliff test.xlf -target result.xml

actual xml result

<?xml version="1.0" encoding="UTF-8"?>
<ROOT>
  <NIV1>
          <G>
         Bonjour. 
           Hello. 
          </G>
         Ce text devrait être traduit  This text should be translated </NIV1>
</ROOT>

expected xml result

<?xml version="1.0" encoding="UTF-8"?>
<ROOT>
  <NIV1><G> Hello. </G> This text should be translated </NIV1>
</ROOT>

Investigation

First, I observed that this issue appears because I reformatted the XLIFF file.
If I roll back some changes, the result is correct. The rolled-back changes are:

      <ignorable>
        <source xml:space="preserve">
          <ph id="ph0"/>
        </source>
      </ignorable>

      <ignorable>
        <source xml:space="preserve"><ph id="ph0"/></source>
      </ignorable>

(the same change is needed for ph1)

Looking into the code, I think I understand the problem.

In https://github.com/rmraya/OpenXLIFF/blob/v3.15.0/src/com/maxprograms/xliff2/FromXliff2.java#L294-L326

We have:

  • l. 301: joinedSource.addContent(src.getContent());
  • l. 308: joinedTarget.addContent(src.getContent());

At this point joinedSource and joinedTarget point to the same content (the same Java objects, by reference).

Then l. 326 we have:

src.setContent(harvestContent(joinedSource, tags, attributes));

This internally calls joinedSource.getContent(), which itself calls Element#mergeText.

And this is precisely where the issue lies.
When harvesting the source, we call mergeText, which mutates some XML nodes that are also referenced by the target. After harvesting the source, the content of the target becomes invalid.

To confirm this hypothesis, I simply removed the call to #mergeText in Element#getContent, and it indeed fixes the problem.
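The aliasing described above can be demonstrated in isolation (illustrative classes, not the real OpenXLIFF types): two containers hold references to the same mutable node, so a mergeText-style mutation made while reading one is visible through the other.

```java
import java.util.ArrayList;
import java.util.List;

public class AliasSketch {
    // Minimal stand-in for a mutable text node shared between two elements.
    static class TextNode {
        String text;
        TextNode(String text) { this.text = text; }
    }

    static String demo() {
        TextNode shared = new TextNode("Bonjour.");
        List<TextNode> joinedSource = new ArrayList<>(List.of(shared));
        List<TextNode> joinedTarget = new ArrayList<>(List.of(shared)); // same object!

        // A mutation made while "getting" the source content...
        joinedSource.get(0).text = joinedSource.get(0).text + " (merged)";

        // ...shows up in the target, which was never touched directly.
        return joinedTarget.get(0).text;
    }
}
```

Deep-copying the nodes when sharing content between two Elements, or making getContent side-effect free, would both break this coupling.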

Solution

I don't know what the best fix would be; I see different possibilities, such as:

  • Not mutating state in a method that apparently only reads data (we don't expect #getContent to mutate the state of Element)
    • I saw that this method is also called from the equals method, which can be dangerous as well
  • Not sharing data between two Element objects, which would require copying the XMLNodes

If you want me to share a PR with a fix proposal, feel free to ask.

By the way, I think it would be very helpful to introduce a pom.xml so the project can be consumed as a Maven dependency (in which case XMLJava would need to be published to Maven Central).

compilation: unmappable character for encoding US-ASCII

Hello,

I had an issue compiling OpenXLIFF from master with ant, but I managed to fix it by setting the javac compiler option shown below in build.xml. Would you like me to submit a pull request?


Problem

# ant
  ...
compile:
    [javac] Compiling 112 source files to /root/OpenXLIFF-master/bin
    [javac] /root/OpenXLIFF-master/src/com/maxprograms/converters/xml/Xml2Xliff.java:842: error: unmappable character (0xC2) for encoding US-ASCII
    [javac] 			if (" \u00A0\r\n\f\t\u2028\u2029,.;\":<>?????!()[]{}=+/*\u00AB\u00BB\u201C\u201D\u201E\uFF00"
    [javac] 			                                        ^
    [javac] /root/OpenXLIFF-master/src/com/maxprograms/converters/xml/Xml2Xliff.java:842: error: unmappable character (0xBF) for encoding US-ASCII
    [javac] 			if (" \u00A0\r\n\f\t\u2028\u2029,.;\":<>?????!()[]{}=+/*\u00AB\u00BB\u201C\u201D\u201E\uFF00"
    [javac] 			                                         ^
    [javac] /root/OpenXLIFF-master/src/com/maxprograms/converters/xml/Xml2Xliff.java:842: error: unmappable character (0xC2) for encoding US-ASCII
    [javac] 			if (" \u00A0\r\n\f\t\u2028\u2029,.;\":<>?????!()[]{}=+/*\u00AB\u00BB\u201C\u201D\u201E\uFF00"
    [javac] 			                                           ^
    [javac] /root/OpenXLIFF-master/src/com/maxprograms/converters/xml/Xml2Xliff.java:842: error: unmappable character (0xA1) for encoding US-ASCII
    [javac] 			if (" \u00A0\r\n\f\t\u2028\u2029,.;\":<>?????!()[]{}=+/*\u00AB\u00BB\u201C\u201D\u201E\uFF00"
    [javac] 			                                            ^
    [javac] 4 errors

Fix

in build.xml:

<javac srcdir="src" destdir="bin" classpathref="OpenXLIFF.classpath" modulepathref="OpenXLIFF.classpath" includeAntRuntime="false">
		<compilerarg line="-encoding utf-8" />
</javac>
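As a possible alternative to the compilerarg, Ant's <javac> task also accepts an encoding attribute directly; an equivalent build.xml entry (untested against this project) would be:

```xml
<javac srcdir="src" destdir="bin" encoding="UTF-8"
       classpathref="OpenXLIFF.classpath" modulepathref="OpenXLIFF.classpath"
       includeAntRuntime="false"/>
```

Either form tells javac to read the non-ASCII characters in Xml2Xliff.java as UTF-8 regardless of the system locale.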

Environment

# uname -r
5.10.61
# cat /etc/debian_version
11.0
# java --version
openjdk 11.0.12 2021-07-20
OpenJDK Runtime Environment (build 11.0.12+7-post-Debian-2)
OpenJDK 64-Bit Server VM (build 11.0.12+7-post-Debian-2, mixed mode, sharing)
# ant -version
Apache Ant(TM) version 1.10.11 compiled on July 10 2021
# locale
LANG=
LANGUAGE=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
LC_COLLATE="POSIX"
LC_MONETARY="POSIX"
LC_MESSAGES="POSIX"
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=

How do I write a given <target> on XLIFF 2.0?

I'm using okapi-lib-xliff2 to read and write a .xlf file generated by OpenXLIFF's ./convert.sh, but afterwards Word deems the document corrupted when I merge the XLIFF file back with OpenXLIFF's ./merge.sh, because okapi-lib writes and saves the XLIFF file in a different format.

Is there any API/library for writing to the .xlf file generated by ./convert.sh?

config_dita | Missing config

According to this example:

<title>Learning Overview topic</title>
<title>Objectives</title>

When you complete this lesson, you'll know how to do the following: Create a good learning overview topic. Identify clear learning objectives. Add good test items to assess knowledge gained.

The following config is missing: lcObjective

Newline lost in conversion cycle for some cases

For certain cases, when running convert + merge without making any changes to the XLIFF file, the output document differs from the input: a newline is lost. I'm wondering whether this is expected behavior when using default.srx, or a bug.

See attached example files:
input: test3.docx

Test?
Example.

output: test3_sv.docx

Test?Example.

Steps:
./convert.sh -file test3.docx -srcLang en -tgtLang sv -2.0 -embed
./merge.sh -xliff test3.docx.xlf -target test3_sv.docx

Version used: latest master branch, bddd767
The provided default.srx was used (but as far as I understand, segmentation should not affect whether the output and input documents match?)

getting null pointer exception when trying to convert a ditamap to xliff1.2

Hi, I am getting the following stack trace of the error:

Cannot invoke "Object.hashCode()" because "key" is null
	at java.base/java.util.Hashtable.containsKey(Hashtable.java:353)
	at openxliff/com.maxprograms.xml.Catalog.resolveEntity(Catalog.java:334)
	at java.xml/com.sun.org.apache.xerces.internal.util.EntityResolver2Wrapper.resolveEntity(EntityResolver2Wrapper.java:178)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLEntityManager.resolveEntityAsPerStax(XMLEntityManager.java:1026)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLEntityManager.startEntity(XMLEntityManager.java:1307)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.startPE(XMLDTDScannerImpl.java:732)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.skipSeparator(XMLDTDScannerImpl.java:2101)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.scanDecls(XMLDTDScannerImpl.java:2064)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDTDScannerImpl.scanDTDExternalSubset(XMLDTDScannerImpl.java:299)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.dispatch(XMLDocumentScannerImpl.java:1165)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$DTDDriver.next(XMLDocumentScannerImpl.java:1040)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:917)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:605)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:112)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:542)
	at java.xml/com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:889)
	at java.xml/com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:825)
	at java.xml/com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
	at java.xml/com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1224)
	at java.xml/com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:637)
	at openxliff/com.maxprograms.xml.SAXBuilder.build(SAXBuilder.java:170)
	at openxliff/com.maxprograms.xml.SAXBuilder.build(SAXBuilder.java:69)
	at openxliff/com.maxprograms.converters.ditamap.ScopeBuilder.recurse(ScopeBuilder.java:117)
	at openxliff/com.maxprograms.converters.ditamap.ScopeBuilder.recurse(ScopeBuilder.java:172)
	at openxliff/com.maxprograms.converters.ditamap.ScopeBuilder.recurse(ScopeBuilder.java:172)
	at openxliff/com.maxprograms.converters.ditamap.ScopeBuilder.recurse(ScopeBuilder.java:172)
	at openxliff/com.maxprograms.converters.ditamap.ScopeBuilder.recurse(ScopeBuilder.java:172)
	at openxliff/com.maxprograms.converters.ditamap.ScopeBuilder.recurse(ScopeBuilder.java:119)
	at openxliff/com.maxprograms.converters.ditamap.ScopeBuilder.recurse(ScopeBuilder.java:172)
	at openxliff/com.maxprograms.converters.ditamap.ScopeBuilder.recurse(ScopeBuilder.java:172)
	at openxliff/com.maxprograms.converters.ditamap.ScopeBuilder.buildScope(ScopeBuilder.java:74)
	at openxliff/com.maxprograms.converters.ditamap.DitaParser.run(DitaParser.java:154)
	at openxliff/com.maxprograms.converters.ditamap.DitaMap2Xliff.run(DitaMap2Xliff.java:99)
	at openxliff/com.maxprograms.converters.Convert.run(Convert.java:405)
	at openxliff/com.maxprograms.converters.Convert.main(Convert.java:280)
It happens in the function: public InputSource resolveEntity(String name, String publicId, String baseURI, String systemId) in Catalog.java.

Apparently the publicId in this case happens to be null, whereas the systemId is non-null. And in every case the baseURI is null; it seems to be set to null by default. So the code ends up at dtdEntities.containsKey(publicId), which throws this error. Any ideas how to go about fixing it?
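One possible workaround shape, sketched with hypothetical surrounding code (the real Catalog.resolveEntity is more involved): guard the lookup so a null publicId never reaches Hashtable.containsKey, which throws NullPointerException on null keys, and fall back to systemId-based resolution.

```java
import java.util.Hashtable;

public class CatalogSketch {
    // Hashtable rejects null keys, unlike HashMap, hence the explicit guard.
    private final Hashtable<String, String> dtdEntities = new Hashtable<>();

    // Illustrative resolution logic, not the actual Catalog implementation.
    String resolve(String publicId, String systemId) {
        if (publicId != null && dtdEntities.containsKey(publicId)) {
            return dtdEntities.get(publicId);
        }
        // publicId absent or unknown: fall back to the systemId
        return systemId;
    }
}
```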

Renaming xliff files results in merged files without suffix

martin@smrad ~/test> ls
test_en.docx test_en.docx.skl test_en.docx.xlf
martin@smrad ~/test> mv test_en.docx.xlf test_BAK.xlf
martin@smrad ~/test> ~/Applications/OpenXLIFF/merge.sh -xliff test_BAK.xlf
martin@smrad ~/test> ls
test_BAK test_BAK.xlf test_en.docx test_en.docx.skl

test_BAK doesn't have the source file's suffix.

But I also think the merged file's name should always follow the source file's name, not the XLIFF's; i.e., test_en.docx should become test_en_tr.docx no matter what the .xlf file is named.
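The suggested naming rule can be sketched as follows (purely illustrative; the "_tr" marker stands in for whatever suffix OpenXLIFF would actually choose):

```java
public class TargetNameSketch {
    // Derives the merged file's name from the source document's name,
    // inserting a marker before the extension so the suffix survives.
    static String targetName(String originalFile, String marker) {
        int dot = originalFile.lastIndexOf('.');
        if (dot < 0) {
            return originalFile + marker; // no extension to preserve
        }
        return originalFile.substring(0, dot) + marker + originalFile.substring(dot);
    }
}
```

This way the original filename recorded in the XLIFF, not the .xlf filename, drives the output name.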

Setting to treat SVGs as binary

For the Oxygen Fluenta add-on, we have a client who would like the generated XLIFF to treat SVGs as binary images, as they are not interested in translating the SVGs. Looking at the code, there does not seem to be a setting stating that SVGs should be treated as binary.

StackOverflowError when exporting XLIFF and there is a link inside a list item pointing to itself

Actually, inside the list item there are two paragraphs, and at some point in the second paragraph there is a cross-reference pointing to the first one. I attached a minimal sample: testFluenta.zip.

The error is the following:

Exception in thread "Thread-35" java.lang.StackOverflowError
        at java.base/java.lang.RuntimeException.<init>(RuntimeException.java:52)
        at java.base/java.lang.IllegalArgumentException.<init>(IllegalArgumentException.java:40)
        at java.base/java.util.regex.PatternSyntaxException.<init>(PatternSyntaxException.java:58)
        at java.base/java.util.regex.Pattern.error(Pattern.java:2028)
        at java.base/java.util.regex.Pattern.<init>(Pattern.java:1432)
        at java.base/java.util.regex.Pattern.compile(Pattern.java:1069)
        at java.base/java.lang.String.split(String.java:3155)
        at java.base/java.lang.String.split(String.java:3201)
        at com.maxprograms.converters.ditamap.DitaParser.ditaClass(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)
        at com.maxprograms.converters.ditamap.DitaParser.recurse(Unknown Source)

Feature suggestion: Change spelling language of target document to the target language

I noticed that the output document keeps the spelling language of the source document.
I'm not sure whether this is an easy change or not, but it would be a nice improvement if the output document's spelling language matched the target language.
Example of current behavior:

  • ./convert.sh -file msdoc.docx -srcLang en -tgtLang sv -2.0 -embed
  • translate the text to Swedish
  • ./merge.sh -xliff msdoc.docx.xlf -target msdoc_sv.docx
  • open the output document msdoc_sv.docx
  • the spelling language is English

For Indian Languages: Source text gets stored in a weird fashion in XLIFF format

For Indian languages like Hindi, Sanskrit, etc., apart from the original text, the "source" field contains metadata after each word. This is unusual and doesn't happen with Western languages like English, French, or German. It is problematic because CAT tools present the source field as-is in the source-language column.

(Screenshot attached: Screenshot (211))

I am attaching the original text file and the converted XLIFF files as well
OpenXLIFF.zip

Update:
This is happening with OFF files but not with TEXT files.
