The most recent content from our members.
Hello, I would like to ask: how do I convert my tokenized-words attribute back into my original text attribute in an Excel file? I was tokenizing the words to correct many mistakes in my text, using Stem (Dictionary) and several other operators inside Process Documents from Data. The problem is that I can't find any operator that…
The format of the example set is attached. The output .txt file should be in the following format: "S12345","FP" "S23456","FP" "S34567","FP" "S45678","FP"
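As a generic illustration outside RapidMiner, writing quoted "ID","label" pairs like those above to a .txt file can be sketched in Python with the standard `csv` module; the rows and output path here are hypothetical stand-ins for the example set:

```python
import csv

# Hypothetical (ID, label) pairs matching the format shown in the post.
rows = [("S12345", "FP"), ("S23456", "FP"), ("S34567", "FP"), ("S45678", "FP")]

# QUOTE_ALL wraps every field in double quotes, producing lines
# like "S12345","FP" exactly as described.
with open("output.txt", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerows(rows)
```

Each pair lands on its own line of `output.txt`, one record per row of the example set.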
Hello. I have a problem with the dummy operator of Process Documents from Data; the error is shown in the picture. I am using RapidMiner AI Hub 9.9. I have installed the extension and deployed it on the server, but the error persists.
Hi there RapidMiner Community, I'm currently trying to load JSON data into RapidMiner for a sentiment analysis I'm working on. After browsing through a lot of the forum content here on working with JSON data and trying many of the suggested solutions, I sadly haven't found one that works for me. The data should…
I have an Excel file with an ID and the text I want to cluster. How do I preserve the ID in the Excel output alongside the cluster results?
Hello everyone! I'm trying to use the Generate Gaussian operator to plot the frequency of words, but when I compare its results with my own (calculated manually), they're really different. I need this operation to understand which values to discard through pruning. What's the formula that RapidMiner uses to create…
Hello RapidMiners, I want to write a text file for each row of the example set. For example, the Requisition Title column data will become the filename and Overview will become the contents of the file. I have around 300 rows of data, so 300 files should be produced: each filename should come from ‘Requisition Title’ and the contents of the text file should come from…
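As a generic sketch outside RapidMiner, the one-file-per-row idea could look like this in Python; the two column names are taken from the post, while the sample rows and output folder are hypothetical:

```python
import os

# Hypothetical example-set rows using the two columns named in the post.
rows = [
    {"Requisition Title": "Data Engineer", "Overview": "Builds data pipelines."},
    {"Requisition Title": "QA Analyst", "Overview": "Tests releases."},
]

out_dir = "requisitions"  # hypothetical output folder
os.makedirs(out_dir, exist_ok=True)

for row in rows:
    # Drop characters that are not safe in filenames.
    safe_name = "".join(
        c for c in row["Requisition Title"] if c.isalnum() or c in " -_"
    ).strip()
    # One .txt file per row: title -> filename, Overview -> contents.
    with open(os.path.join(out_dir, safe_name + ".txt"), "w") as f:
        f.write(row["Overview"])
```

With ~300 rows the same loop simply produces ~300 files.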
I am using an Excel file with 9 columns and 4,000 rows of reviews associated with 4 products. I can't get going on this and have gone over the tutorials, but the first step is not clear. To use Tokenize and Filter Stopwords, I need a document, but how do I convert the Excel file into a document?
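Stepping outside RapidMiner for a moment, the underlying idea of that conversion is simply turning each text cell into a document and then tokenizing and filtering stop words. A minimal sketch, assuming a hypothetical list of review texts and a tiny stop-word list:

```python
import re

# Hypothetical review texts, as if read from the Excel column.
reviews = ["Great product, works well!", "Not what I expected at all."]

# Small hypothetical stop-word list for illustration.
stop_words = {"the", "a", "at", "all", "what", "i", "not"}

docs = []
for text in reviews:
    tokens = re.findall(r"[a-z]+", text.lower())        # tokenize
    tokens = [t for t in tokens if t not in stop_words]  # filter stop words
    docs.append(tokens)

print(docs)
```

Each review becomes its own token list, i.e. one "document" per row of the file.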
Exceptional individuals who support the Community.
The Community Team would like to introduce the April Champion of the Month, Lasse Kaikko @Lasse. Lasse works as a Solution Manager in a team whose key responsibilities include automating the transformation of unstructured or semi-structured data into a structured format. He has been working with data…
Configuring GPU for PhysicsAI model training Hello PhysicsAI users, as you are already aware, the GPU can be leveraged for model training in PhysicsAI; however, there are a few important points to consider to make sure the GPU actually gets utilized by PhysicsAI. First consideration: required…
Altair will host our first global ATCx AEC, a one-day virtual event dedicated to transforming the architecture, engineering, and construction (AEC) sector. This dynamic event will bring together world-renowned thought leaders and industry experts to explore the trends, technologies, and opportunities shaping the future of…
The Altair Support page provides users with access to the Altair Learning Center page, which can be reached either directly or by logging into Altair One or the Altair Community. Users in the Altair Learning Center have access to four different sections. In the Explore Content section, users can refine their search…
Exciting Update! We have added 6 more sessions to our ongoing Community Magnetic Chat Series. This monthly series is designed to empower industry experts by providing them with insights into the latest features, workflows, and advancements in Altair’s Electromagnetics solutions. Throughout the series, attendees will have the…
Hello Community Members! Welcome to the April Community Roundup! We're thrilled to bring you the monthly collection of activities in our community. Your feedback matters to us: we'd love to hear your thoughts on these roundups and learn what content you find most valuable. What would you like to see covered in future…