Hello,
I am trying to process a collection of text documents in the following manner:
1. For each document d, compute a term frequency vector fv;
2. transform the term frequencies in fv into probabilities, obtaining a term probability vector pv;
3. use a precomputed language model lm to compute the Kullback-Leibler divergence between pv and lm.
I have completed step 1 successfully.
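For reference, my step 1 looks roughly like this (a minimal Python sketch; the whitespace tokenizer is just a placeholder for whatever tokenization is appropriate):

```python
from collections import Counter

def term_frequency_vector(document: str) -> Counter:
    # Naive tokenization: lowercase and split on whitespace.
    # A real pipeline would also handle punctuation, stemming, etc.
    tokens = document.lower().split()
    return Counter(tokens)  # fv: term -> raw count
```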

Step 2 is less clear: how do I get the total count of all terms in fv so that I can normalize? Step 3 is totally unclear to me. My language model has a vertical layout, i.e. entries are roughly of the form [term][probability of occurrence], while the probability vector has a horizontal layout, (p_of_term_1, ..., p_of_term_n).
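My rough idea for steps 2 and 3 is below (a sketch only; it assumes both pv and lm can be held as term-to-probability dicts, and it simply skips terms missing from lm rather than smoothing):

```python
import math
from collections import Counter

def probability_vector(fv: Counter) -> dict:
    # Step 2: normalize raw counts by the total number of tokens.
    total = sum(fv.values())
    return {term: count / total for term, count in fv.items()}

def kl_divergence(pv: dict, lm: dict) -> float:
    # Step 3: D_KL(pv || lm) = sum over terms t of pv[t] * log(pv[t] / lm[t]).
    # Terms absent from lm are skipped here; strictly speaking they make
    # the divergence infinite unless lm is smoothed.
    return sum(p * math.log(p / lm[term])
               for term, p in pv.items()
               if term in lm)
```

If both vectors are keyed by term like this, the vertical vs. horizontal layout difference seems to go away, but I am not sure how terms that appear in a document but not in the language model should be handled.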
Any suggestions?