com.rapidminer.gui.OperatorDocToHtmlConverter.convert

daniel_thibault
New Altair Community Member
My extension documentation (for consumption by com.rapidminer.gui.OperatorDocumentationBrowser.assignDocumentation) keeps hitting a NullPointerException within com.rapidminer.gui.OperatorDocToHtmlConverter.convert when it calls trans.transform(xmlSource, new StreamResult(buffer)). Now, this happens within javax.xml.transform.Transformer, so I'm not sure what exactly it's complaining about.
http://rapid-i.com/schemas/documentation/reference/1.0/documentation.xsd was useful for finding out the grammar of the expected XML, which I'm following.
Trying to isolate the incorrect bit, I've cut the file down until only what follows below remains, but it still fails with a NullPointerException. Any clue?
<?xml version="1.0" encoding="UTF-8"?>
<p1:documents xmlns:p1="http://rapid-i.com/schemas/documentation/reference/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://rapid-i.com/schemas/documentation/reference/1.0 http://rapid-i.com/schemas/documentation/reference/1.0/documentation.xsd">
<operator key="operator.stream_trace" locale="en" version="5.2.008">
<title>Stream LTF Trace</title>
<synopsis>This operator can read a channel from an LTTng (Linux Trace Toolkit Next Generation) trace written in LTF 2.6 by incrementally caching it.</synopsis>
</operator>
</p1:documents>
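A quick way to check whether the file itself is the problem is to validate it against the published XSD with the standard javax.xml.validation API. A minimal sketch, assuming the schema URL above is reachable and using a placeholder path for the documentation file:

import java.io.File;
import java.net.URL;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class DocValidator {
    public static void main(String[] args) throws Exception {
        // Load the documentation schema (assumes the URL is reachable).
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(
                new URL("http://rapid-i.com/schemas/documentation/reference/1.0/documentation.xsd"));

        // Validate the extension's documentation file (placeholder path).
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new File("resources/LTFDataReader/import/data/stream_trace.xml")));
        System.out.println("Document is well-formed and valid against the schema.");
    }
}

If validation passes, the XML content is unlikely to be the cause of the NullPointerException.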
Answers
-
Hi,
can you please provide a stacktrace? Where did you put your XML file?
Best,
Nils
-
Nils wrote:
can you please provide a stacktrace? Where did you put your XML file?
No stack trace; the exception is issued from within javax.xml.transform.Transformer. Within the plugin jar, the XML file is at LTFDataReader/import/data/stream_trace.xml; in the project it lies at resources/LTFDataReader/import/data/stream_trace.xml. The file is found and the stream created; the exception occurs at com.rapidminer.gui.OperatorDocToHtmlConverter.convert(), line 97: "Transformer trans = transFact.newTransformer(xsltSource);". The only console output from javax.xml.transform is ERROR: ''. The exception is a TransformerException with a cause of NullPointerException. I guess I'll see if I can't convince Eclipse to run this in a 1.6 JDK so I can follow the trace deeper in.
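The empty ERROR: '' is most likely the default error listener printing the NullPointerException's (null) message. One way to see more is to register a custom javax.xml.transform.ErrorListener that dumps the whole cause chain; a minimal sketch (the listener class is illustrative, not part of the converter):

import javax.xml.transform.ErrorListener;
import javax.xml.transform.TransformerException;

// Illustrative listener that prints the full cause chain instead of the bare message.
class VerboseErrorListener implements ErrorListener {
    private void dump(String level, TransformerException e) {
        System.err.println(level + ": " + e.getMessageAndLocation());
        Throwable cause = e.getCause();
        while (cause != null) {
            System.err.println("  caused by: " + cause);
            for (StackTraceElement frame : cause.getStackTrace()) {
                System.err.println("    at " + frame);
            }
            cause = cause.getCause();
        }
    }
    public void warning(TransformerException e)    { dump("WARNING", e); }
    public void error(TransformerException e)      { dump("ERROR", e); }
    public void fatalError(TransformerException e) throws TransformerException {
        dump("FATAL", e);
        throw e;
    }
}

Registering it via transFact.setErrorListener(...) and trans.setErrorListener(...) before the transform (which would mean patching the converter locally or reproducing the call standalone) should show where the wrapped NullPointerException actually originates.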
The stack shown in Eclipse is:
Daemon Thread [ProgressThread] (Suspended)
ArrayList<E>.indexOf(Object) line: not available
ArrayList<E>.contains(Object) line: not available
XIncludeAwareParserConfiguration(XML11Configuration).addCommonComponent(XMLComponent) line: not available
XIncludeAwareParserConfiguration(XML11Configuration).<init>(SymbolTable, XMLGrammarPool, XMLComponentManager) line: not available
XIncludeAwareParserConfiguration.<init>(SymbolTable, XMLGrammarPool, XMLComponentManager) line: not available
XIncludeAwareParserConfiguration.<init>() line: not available
SAXParserImpl$JAXPSAXParser(SAXParser).<init>(SymbolTable, XMLGrammarPool) line: not available
SAXParserImpl$JAXPSAXParser(SAXParser).<init>() line: not available
SAXParserImpl$JAXPSAXParser.<init>(SAXParserImpl) line: not available
SAXParserImpl.<init>(SAXParserFactoryImpl, Hashtable, boolean) line: not available
SAXParserImpl.<init>(SAXParserFactoryImpl, Hashtable) line: not available
SAXParserFactoryImpl.newSAXParserImpl() line: not available
SAXParserFactoryImpl.setFeature(String, boolean) line: not available
Parser.parse(InputSource) line: not available
XSLTC.compile(InputSource, String) line: not available
XSLTC.compile(String, InputSource, int) line: not available
TransformerFactoryImpl.newTemplates(Source) line: not available
TransformerFactoryImpl.newTransformer(Source) line: not available
OperatorDocToHtmlConverter.convert(InputStream, Operator) line: 97
OperatorDocumentationBrowser.parseXmlAndReturnHtml(InputStream) line: 190
OperatorDocumentationBrowser.access$1(OperatorDocumentationBrowser, InputStream) line: 188
OperatorDocumentationBrowser$1.run() line: 313
ProgressThread$2.run() line: 189
Executors$RunnableAdapter<T>.call() line: not available
FutureTask$Sync.innerRun() line: not available
FutureTask<V>.run() line: not available
ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) line: not available
ThreadPoolExecutor$Worker.run() line: not available
Thread.run() line: not available
-
All right, trans.transform(xmlSource, new StreamResult(buffer)); takes us to the method below. We reach 322, then 325, then 340.
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl
public void transform(Source source, Result result) 313
throws TransformerException
{
if (!_isIdentity) {
if (_translet == null) {
ErrorMsg err = new ErrorMsg(ErrorMsg.JAXP_NO_TRANSLET_ERR); 318
throw new TransformerException(err.toString());
}
// Pass output properties to the translet
transferOutputProperties(_translet); 322
final SerializationHandler toHandler = getOutputHandler(result); 325
if (toHandler == null) {
ErrorMsg err = new ErrorMsg(ErrorMsg.JAXP_NO_HANDLER_ERR); 327
throw new TransformerException(err.toString()); 328
}
if (_uriResolver != null && !_isIdentity) { 331
_translet.setDOMCache(this); 332
}
// Pass output properties to handler if identity
if (_isIdentity) { 336
transferOutputProperties(toHandler); 337
}
transform(source, toHandler, _encoding); 340
try{
if (result instanceof DOMResult) { 342
((DOMResult)result).setNode(_tohFactory.getNode()); 343
} else if (result instanceof StAXResult) { 344
if (((StAXResult) result).getXMLEventWriter() != null) 345
{
(_tohFactory.getXMLEventWriter()).flush(); 347
}
else if (((StAXResult) result).getXMLStreamWriter() != null) { 349
(_tohFactory.getXMLStreamWriter()).flush(); 350
//result = new StAXResult(_tohFactory.getXMLStreamWriter()); 351
}
}
} catch (Exception e) { 354
System.out.println("Result writing error"); 355
}
}
The if at 708 fails (is skipped) and so does the one at 726, so we next reach 729. At this point source and handler are both defined (not null). But this is where an exception is thrown, so we end up at 735 and the rest is as you'd expect.
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl
private void transform(Source source, SerializationHandler handler, 696
String encoding) throws TransformerException
{
try {
/*
* According to JAXP1.2, new SAXSource()/StreamSource()
* should create an empty input tree, with a default root node.
* new DOMSource()creates an empty document using DocumentBuilder.
* newDocument(); Use DocumentBuilder.newDocument() for all 3
* situations, since there is no clear spec. how to create
* an empty tree when both SAXSource() and StreamSource() are used.
*/
if ((source instanceof StreamSource && source.getSystemId()==null 708
&& ((StreamSource)source).getInputStream()==null &&
((StreamSource)source).getReader()==null)||
(source instanceof SAXSource &&
((SAXSource)source).getInputSource()==null &&
((SAXSource)source).getXMLReader()==null )||
(source instanceof DOMSource &&
((DOMSource)source).getNode()==null)){
DocumentBuilderFactory builderF = FactoryImpl.getDOMFactory(_useServicesMechanism);
DocumentBuilder builder = builderF.newDocumentBuilder(); 717
String systemID = source.getSystemId();
source = new DOMSource(builder.newDocument()); 719
// Copy system ID from original, empty Source to new
if (systemID != null) { 722
source.setSystemId(systemID);
}
}
if (_isIdentity) { 726
transformIdentity(source, handler);
} else {
_translet.transform(getDOM(source), handler); 729
}
} catch (TransletException e) {
if (_errorListener != null) postErrorToListener(e.getMessage()); 732
throw new TransformerException(e);
} catch (RuntimeException e) {
if (_errorListener != null) postErrorToListener(e.getMessage()); 735
throw new TransformerException(e);
} catch (Exception e) {
if (_errorListener != null) postErrorToListener(e.getMessage()); 738
throw new TransformerException(e);
} finally {
_dtmManager = null; 741
}
// If we create an output stream for the Result, we need to close it after the transformation.
if (_ostream != null) { 745
try {
_ostream.close(); 747
}
catch (IOException e) {}
_ostream = null; 750
}
}
At 729, the getDOM(source) call succeeds. Then we jump to the snippet below. It is apparently the Iterator that fails after a few calls to hasNext; I can't tell exactly where or why, because the intermediate GregorSamsa.transform(DOM, DTMAxisIterator, SerializationHandler) call shows "line not available" in Eclipse's stack display.
com.sun.org.apache.xalan.internal.xsltc.runtime.AbstractTranslet.transform
try {
transform(document, document.getIterator(), handler); 611
} finally {
_keyIndexes = null;
}
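Since Eclipse only shows "line not available" inside the generated translet, another option is to reproduce the failing call outside RapidMiner, where the full NullPointerException stack trace gets printed. A minimal sketch with placeholder paths (the XSLT is whichever stylesheet the converter loads as xsltSource):

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class StandaloneTransform {
    public static void main(String[] args) throws Exception {
        // Placeholder paths: the XSLT is whatever the converter passes as xsltSource.
        StreamSource xsltSource = new StreamSource(new File("documentation2html.xsl"));
        StreamSource xmlSource = new StreamSource(
                new File("resources/LTFDataReader/import/data/stream_trace.xml"));

        Transformer trans = TransformerFactory.newInstance().newTransformer(xsltSource);
        // Any NullPointerException thrown here arrives with a full stack trace,
        // with no "line not available" frames hiding its origin.
        trans.transform(xmlSource, new StreamResult(System.out));
    }
}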
-
If the Iterator over the XML file fails, I assume your XML file is somehow malformed.
Here is an example of the new documentation, for the Write Weights operator:
Best,
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="../../../../documentation2html.xsl"?>
<p1:documents xmlns:p1="http://rapid-i.com/schemas/documentation/reference/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://rapid-i.com/schemas/documentation/reference/1.0 http://rapid-i.com/schemas/documentation/reference/1.0/documentation.xsd">
<!-- each operator should have a key that consists of "operator." plus the operator's key. -->
<operator key="operator.write_weights" locale="en" version="5.2.003">
<title>Write Weights</title>
<synopsis>This operator writes the given attribute weights into the specified file.</synopsis>
<text>
<paragraph>The Write Weights operator writes the attribute weights provided at the input port into the specified file. The path of the file is specified through the <em>attribute weights file</em> parameter. Each line in the file holds the name of one attribute and its weight. The <reference key="operator.read_weights">Read Weights</reference> operator can be used to read the weights written by this operator. </paragraph>
<paragraph>The attribute weights specify the relevance of the attributes with respect to the label attribute. There are numerous operators that create attribute weights e.g. the Weight by Correlation, Weight by PCA etc. Operators that generate attribute weights are located at 'Modeling/Attribute Weighting' in the Operators Window.</paragraph>
</text>
<!--
<differentiation>
<relatedDocument key=""></relatedDocument>
</differentiation>
-->
<inputPorts>
<port name="input" type="com.rapidminer.example.AttributeWeights">This input port expects attribute weights. It is the output of the Weight by Correlation operator in the attached Example Process.</port>
</inputPorts>
<outputPorts>
<port name="through" type="com.rapidminer.example.AttributeWeights">The attribute weights that were provided at the input port are delivered through this output port without any modifications.This is usually used to reuse the same attribute weights in further operators of the process.</port>
</outputPorts>
<parameters>
<!-- description of the parameters and the corresponding values -->
<parameter key="attribute_weights_file" type="filename">The path of the file where the attribute weights are to be written is specified here. It can be selected using the <em>choose a file</em> button.</parameter>
<parameter key="encoding" type="selection">This is an expert parameter. There are different options, users can choose any of them</parameter>
</parameters>
<!--
<relatedDocuments>
<relatedDocument key=""></relatedDocument>
</relatedDocuments>
-->
<tutorialProcesses>
<tutorialProcess key="process.write_weights.write_read_weights" title="Writing and then Reading attribute weights from a file">
<description>
<paragraph>This Example Process shows how the Write Weights operator can be used for writing attribute weights and how the Read Weights operator can be used for reading attribute weights from this file. The 'Sonar' data set is loaded using the Retrieve operator. The Weight by Correlation operator is applied on it. A <em>breakpoint</em> is inserted here so that you can have a look at the attribute weights. The resultant weights are provided as input to the Write Weights operator. The <em>attribute weights file</em> parameter is set to 'D:\sonar weights', thus a file named 'sonar weights' is created (if it does not already exist) in the 'D' drive of your computer. You can open the written weights file and make changes in it (if required). The attribute weights file is then read by using the Read Weights operator. The <em>attribute weights file</em> parameter is set to 'D:\sonar weights' to read the same file that was written by the Write Weights operator. The resultant weights are connected to the <em>result</em> port of the process. The attribute weights can be seen in the Results Workspace.</paragraph>
<!-- tutorialProcess description: What is done and shown here? You can use formated text here -->
</description>
<process version="5.2.003">
<context>
<input/>
<output/>
<macros/>
</context>
<operator activated="true" class="process" compatibility="5.2.003" expanded="true" name="Process">
<process expanded="true" height="398" width="625">
<operator activated="true" class="retrieve" compatibility="5.2.003" expanded="true" height="60" name="Sonar" width="90" x="112" y="75">
<parameter key="repository_entry" value="//Samples/data/Sonar"/>
</operator>
<operator activated="true" breakpoints="after" class="weight_by_correlation" compatibility="5.2.003" expanded="true" height="76" name="Weight by Correlation" width="90" x="246" y="75"/>
<operator activated="true" class="write_weights" compatibility="5.2.003" expanded="true" height="60" name="Write Weights" width="90" x="380" y="75">
<parameter key="attribute_weights_file" value="D:\sonar weights.txt"/>
</operator>
<operator activated="true" class="read_weights" compatibility="5.2.003" expanded="true" height="60" name="Read Weights" width="90" x="380" y="210">
<parameter key="attribute_weights_file" value="D:\sonar weights.txt"/>
</operator>
<connect from_op="Sonar" from_port="output" to_op="Weight by Correlation" to_port="example set"/>
<connect from_op="Weight by Correlation" from_port="weights" to_op="Write Weights" to_port="input"/>
<connect from_op="Read Weights" from_port="output" to_port="result 1"/>
<portSpacing port="source_input 1" spacing="0"/>
<portSpacing port="sink_result 1" spacing="180"/>
<portSpacing port="sink_result 2" spacing="90"/>
</process>
</operator>
</process>
</tutorialProcess>
</tutorialProcesses>
</operator>
</p1:documents>
Nils
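A plain namespace-aware parse distinguishes "malformed" (not well-formed) from merely schema-invalid; a minimal sketch using the file path mentioned earlier in the thread (if it runs without a SAXParseException, the file is well-formed and the problem lies elsewhere):

import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

public class WellFormedCheck {
    public static void main(String[] args) throws Exception {
        // A SAXParseException here means the file is not even well-formed,
        // independently of the documentation schema.
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        DocumentBuilder builder = factory.newDocumentBuilder();
        builder.parse(new File("resources/LTFDataReader/import/data/stream_trace.xml"));
        System.out.println("Well-formed.");
    }
}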