
Calculates the log-likelihood of a point process. Provides methods for the generic function `logLik`.

```
## S3 method for class 'mpp'
logLik(object, SNOWcluster=NULL, ...)
```

`object`: an object of class `"mpp"` (see topic `mpp`).

`SNOWcluster`: a cluster object created with `makeCluster` in the `parallel` package. When supplied, the log-likelihood calculation is distributed over the cluster nodes; see "Parallel Processing" below.

`...`: other arguments.

Value of the log-likelihood.

Parallel processing can be enabled to calculate the term *SUM_i log lambda_g(ti|H_ti)*. Generally, the amount of computational work involved in calculating *lambda_g(t|H_t)* is much greater if there are more events in the process history prior to *t* than in the case where there are fewer events. Given *m* nodes, the required evaluation points are divided into *m* groups, taking into account the amount of “history” prior to each event and the CPU speed of the node (see below).
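The grouping described above can be sketched as follows. This is an illustrative re-implementation, not the package's actual code: `split_work` is a hypothetical helper, and the cost of evaluating *lambda_g* at event *i* is simply taken as proportional to *i*, the number of earlier events in the history *H_ti*.

```
# Illustrative sketch only (not the package's actual code): divide n
# evaluation points among nodes so that each node's share of the total
# work is proportional to its relative CPU speed.
split_work <- function(n, cpu.spd) {
  work <- seq_len(n)                          # assumed cost of event i
  share <- cumsum(cpu.spd) / sum(cpu.spd)     # cumulative share per node
  cum <- cumsum(work) / sum(work)             # cumulative work up to event i
  # assign event i to the first node whose cumulative share covers cum[i]
  node <- pmin(findInterval(cum, share, left.open=TRUE) + 1, length(cpu.spd))
  split(seq_len(n), node)
}

# two full-speed nodes and one half-speed node
groups <- split_work(1000, c(1, 1, 0.5))
lengths(groups)   # later groups hold fewer (but individually costlier) events
```

Each node then receives one contiguous block of events, keeping communication down to a single dispatch per node.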

We have assumed that communication between nodes is fairly slow, and hence it is best to allocate the work in large chunks and minimise communication. If the dataset is small, then the time taken to allocate the work to the various nodes may in fact take more time than simply using one processor to perform all of the calculations.

The required steps in initiating parallel processing are as follows.

```
# load the "parallel" package
library(parallel)

# define the SNOW cluster object, e.g. a SOCK cluster
# where each node has the same R installation
cl <- makeSOCKcluster(c("localhost", "horoeka.localdomain",
                        "horoeka.localdomain", "localhost"))

# A more general setup: Totara is Fedora, Rimu is Debian.
# Use 2 processors on Totara, 1 on Rimu:
totara <- list(host="localhost",
               rscript="/usr/lib/R/bin/Rscript",
               snowlib="/usr/lib/R/library")
rimu <- list(host="rimu.localdomain",
             rscript="/usr/lib/R/bin/Rscript",
             snowlib="/usr/local/lib/R/site-library")
cl <- makeCluster(list(totara, totara, rimu), type="SOCK")
# NOTE: THE STATEMENTS ABOVE WERE APPROPRIATE FOR THE snow PACKAGE.
# I HAVE NOT YET TESTED THEM USING THE parallel PACKAGE.

# Relative CPU speeds of the nodes can be added as an attribute.
# Say rimu runs at half the speed of totara
# (the default assumes all nodes run at the same speed):
attr(cl, "cpu.spd") <- c(1, 1, 0.5)

# then define the required model object (e.g. see topic "mpp");
# say the model object is called x,
# then calculate the log-likelihood as
print(logLik(x, SNOWcluster=cl))

# stop the R jobs on the slave machines
stopCluster(cl)
```

Note that the communication method does not need to be `SOCK`; see the parallel package documentation, topic `makeCluster`, for other options. Further, if some nodes are on other machines, the firewalls may need to be tweaked. The master machine initiates the **R** jobs on the slave machines by communicating through port 22 (security keys are needed rather than passwords), and subsequent communications use random ports. This port can be fixed; see `makeCluster`.
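For the parallel package's native PSOCK cluster type, the listening port can be fixed directly via the `port` option of `makePSOCKcluster`, so a firewall rule can be written for it. A minimal sketch, using localhost nodes only (remote host names such as `"rimu.localdomain"` could be mixed in, subject to the firewall and key setup described above):

```
library(parallel)

# makePSOCKcluster is the parallel package's analogue of snow's
# makeSOCKcluster; port fixes the master's listening port
cl <- makePSOCKcluster(c("localhost", "localhost"), port=11999)

# check that the nodes respond
clusterEvalQ(cl, R.version.string)

stopCluster(cl)
```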

```
# SRM: magnitude iid exponential with bvalue=1
TT <- c(0, 1000)
bvalue <- 1
params <- c(-2.5, 0.01, 0.8, bvalue*log(10))

# calculate log-likelihood excluding the mark density term
x1 <- mpp(data=NULL,
          gif=srm_gif,
          marks=list(NULL, rexp_mark),
          params=params,
          gmap=expression(params[1:3]),
          mmap=expression(params[4]),
          TT=TT)
x1 <- simulate(x1, seed=5)
print(logLik(x1))

# calculate log-likelihood including the mark density term
x2 <- mpp(data=x1$data,
          gif=srm_gif,
          marks=list(dexp_mark, rexp_mark),
          params=params,
          gmap=expression(params[1:3]),
          mmap=expression(params[4]),
          TT=TT)
print(logLik(x2))

# contribution from magnitude marks
print(sum(dexp(x1$data$magnitude, rate=bvalue*log(10), log=TRUE)))
```
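The magnitude contribution in the example is just a sum of iid exponential log-densities, so it can be checked against the closed form *n* log(rate) − rate Σ *m_i* without the PtProcess package. A self-contained sketch (the magnitudes `m` here are stand-ins drawn with `rexp`, not the simulated catalogue above):

```
# iid exponential magnitudes with rate bvalue*log(10), as in the example
bvalue <- 1
rate <- bvalue * log(10)
set.seed(5)
m <- rexp(100, rate=rate)    # stand-in magnitudes for illustration

term <- sum(dexp(m, rate=rate, log=TRUE))
closed <- length(m)*log(rate) - rate*sum(m)
all.equal(term, closed)      # TRUE
```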
