Using colourvision

library(colourvision)




Introduction

This is a practical user guide to colourvision. Colour vision models allow colour patches to be evaluated independently of human vision. More detailed explanations of colour vision models are provided elsewhere [@Kelber:2003p1255; @ENDLER:2005p538; @Osorio:2008ep; @Kemp:2015jf; @Renoult:2017dv; @Gawryszewski:gt].

Package colourvision provides functions for the three models most commonly used by ecologists, behavioural ecologists, and evolutionary biologists [@Chittka:1992p186; @Vorobyev:1998p1813; @Vorobyev:1998p335; @ENDLER:2005p538], and a generic function to build alternative models based on the same set of general assumptions. These models have been extended to accept any number of photoreceptor types, i.e., the same model may be applied to a dichromatic mammal and a tentatively pentachromatic dipteran. Modelling functions provide a comprehensive output, which may be visualised as publication-ready colour space plots.



Data handling

This section describes functions applied to colour data prior to model calculations.

spec.denoise()

Applies smooth.spline to a data frame containing spectrometric data. Useful when raw spectrophotometer output is noisy.

spider<-read.table("615ad.ProcSpec.transmission", nrows=3000, skip=100)
spider.smooth<-spec.denoise(spider)
par(mfrow=c(1,2), mar=c(5, 4, 4, 2) + 0.1)
plot(spider, ylim=c(0,50), xlim=c(300,700), type="l",
     ylab="Reflectance (%)", xlab="Wavelength (nm)",
     main="before")
plot(spider.smooth, ylim=c(0,50), xlim=c(300,700), type="l",
     ylab="Reflectance (%)", xlab="Wavelength (nm)", main="after")
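The file above comes from a spectrophotometer and is not bundled with the package. As a self-contained sketch of the same idea, one can denoise simulated data built with logistic (described below); the noise level sd = 2 is arbitrary:

set.seed(1)
noisy<-logistic(x=seq(300,700,1), x0=500, L=50, k=0.04) # smooth sigmoid curve
noisy[,2]<-noisy[,2]+rnorm(nrow(noisy), sd=2) # add simulated measurement noise
noisy.smooth<-spec.denoise(noisy) # smooth.spline fit
plot(noisy, type="l", xlab="Wavelength (nm)", ylab="Reflectance (%)")
lines(noisy.smooth, col="red", lwd=2)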



photor()

Photoreceptor sensitivity curves are seldom available, but they can be estimated using the wavelength of maximum sensitivity [$\lambda_{max}$; @Govardovskii:2000tf]. photor generates photoreceptor sensitivity spectra based on $\lambda_{max}$ values:

human<-photor(lambda.max = c(420,530,560), lambda = seq(400, 700, 1))
plot(human[,2]~human[,1], type="l", ylim=c(0,1),
     col="blue", lwd=2,
     ann = FALSE,
     mgp = c(3, 0.7, 0))
mtext(side = 2, text = "Sensitivity", line = 2.5)
mtext(side = 1, text = "Wavelength(nm)", line = 2.5)
lines(human[,3]~human[,1], col="green", lwd=2)
lines(human[,4]~human[,1], col="red", lwd=2)



logistic()

Creates a sigmoid reflectance curve. Useful for simulations using colour vision models [see for instance @Gawryszewski:gt].

R<-logistic(x0=500, L=80, k=0.04)
plot(R, ylim=c(0,100),xlim=c(300,700), xlab="Wavelength (nm)", ylab="Reflectance (%)", type="l")



energytoflux()

Photoreceptors respond to photon numbers, not photon energy [@Endler:1990p1253]. energytoflux converts irradiance data from energy units ($\mu W \times cm^{-2} \times nm^{-1}$) to quantum flux units ($\mu mol \times m^{-2} \times s^{-1}$).

D65energy<-read.csv("D65energy.csv")
D65photon<-energytoflux(D65energy)
par(mfrow=c(1,2),mar=c(5, 5, 4, 2) + 0.1)
plot(D65energy, xlim=c(300,700), type="l",
     ylab=expression("Spectral energy"~(uW~cm^-2*nm^-1)), xlab="Wavelength (nm)",
     main="CIE D65")
plot(D65photon, xlim=c(300,700), type="l",
     ylab=expression("Photon flux"~(umol~m^-2*s^-1)), xlab="Wavelength (nm)", main="CIE D65")
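The conversion rests on the photon-energy relation $E = hc/\lambda$. A minimal sketch of the arithmetic (assuming energytoflux follows this standard relation; the 500-nm example is illustrative):

h<-6.626e-34   # Planck constant (J s)
cc<-2.998e8    # speed of light (m s^-1)
Na<-6.022e23   # Avogadro constant (mol^-1)
lambda<-500e-9 # wavelength: 500 nm in metres
E<-1e-2        # 1 uW cm^-2 expressed in W m^-2
# photons s^-1 m^-2 = E/(hc/lambda); divide by Na and scale to umol:
E*lambda/(h*cc)/Na*1e6 # ~0.042 umol m^-2 s^-1 per uW cm^-2 at 500 nm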



Colour Vision Models

The Basics

This section gives a brief introduction to colour vision models and to the internal package functions used in model calculations. For model-specific functions (CTTKmodel, EMmodel, RNLmodel, RNLthres, and GENmodel), please refer to later sections of this manual.

Colour vision models require a minimum of four parameters for calculations: (1) photoreceptor sensitivity curves, (2) background reflectance spectrum, (3) illuminant spectrum, and (4) the stimulus reflectance spectrum (stimulus). Receptor noise limited models also require photoreceptor noise for each photoreceptor type.

First, one needs to estimate the photon catch of each photoreceptor type in the retina, which is a function of the stimulus reflectance, the photoreceptor sensitivity, and the illuminant spectrum:

$$Q_i = \int_{300}^{700} R(\lambda)I(\lambda)C_i(\lambda) d\lambda$$ where $R(\lambda)$ denotes the stimulus reflectance, $I(\lambda)$ the illuminant spectrum, and $C_i(\lambda)$ the sensitivity curve of photoreceptor $i$. Note that the illuminant spectrum has to be in quantum flux units, not energy units, because photoreceptors respond to photon numbers, not photon energy.

In colourvision, quantum catches are calculated by function Q. Here, quantum catches of a given stimulus (R) are estimated for each photoreceptor type found in Apis mellifera [bee; @Peitsch:1992p214] under the CIE D65 standard illuminant (D65):

R<-logistic(x=seq(300,700,1), x0=500, L=80, k=0.04)
data("bee")
data("D65")

Qcatch1<-Q(R=R,I=D65,C=bee[c(1,2)],interpolate=TRUE,nm=seq(300,700,1))
Qcatch2<-Q(R=R,I=D65,C=bee[c(1,3)],interpolate=TRUE,nm=seq(300,700,1))
Qcatch3<-Q(R=R,I=D65,C=bee[c(1,4)],interpolate=TRUE,nm=seq(300,700,1))
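Q is essentially a numerical integral of the product $R(\lambda)I(\lambda)C_i(\lambda)$. A hand check for the first photoreceptor (a sketch; this should approximate Qcatch1 up to any internal normalisation the function applies):

nm<-seq(300,700,1)
Rf<-approx(R[,1], R[,2], xout=nm)$y     # stimulus reflectance
If<-approx(D65[,1], D65[,2], xout=nm)$y # illuminant
Cf<-approx(bee[,1], bee[,2], xout=nm)$y # first photoreceptor sensitivity
sum(Rf*If*Cf)                           # Riemann sum at 1-nm steps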

In general, colour vision models assume that photoreceptors are adapted to the background. This is achieved by calculating the quantum catch of each photoreceptor type relative to the quantum catch of the background reflectance (also known as the von Kries transformation). $$Qb_i = \int_{300}^{700} Rb(\lambda)I(\lambda)C_i(\lambda) d\lambda$$

$$q_i = \frac{Q_i}{Qb_i}$$

where $Rb$ is the background reflectance, $Q_i$ is the quantum catch from the stimulus reflectance, and $Qb_i$ is the quantum catch from the background reflectance.

Relative quantum catches are calculated using function Qr. Here photoreceptors are assumed to be adapted to a reflectance background based on samples collected in the Brazilian savanna [@Gawryszewski:2012jv]:

data("Rb")

Qr1<-Qr(R=R,Rb=Rb, I=D65,C=bee[c(1,2)],interpolate=TRUE,nm=seq(300,700,1))
Qr2<-Qr(R=R,Rb=Rb, I=D65,C=bee[c(1,3)],interpolate=TRUE,nm=seq(300,700,1))
Qr3<-Qr(R=R,Rb=Rb, I=D65,C=bee[c(1,4)],interpolate=TRUE,nm=seq(300,700,1))
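As a hand check (assuming Qr implements the von Kries transformation above), the relative quantum catch is simply the stimulus quantum catch divided by the background quantum catch:

Qb1<-Q(R=Rb, I=D65, C=bee[c(1,2)], interpolate=TRUE, nm=seq(300,700,1))
Qcatch1/Qb1 # should equal Qr1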

The relationship between photoreceptor input and output is assumed to be non-linear. Each colour vision model uses a different transformation function (e.g., log(x), x/(x+1)), but with the same general result:

X<-seq(1,10,by=0.1)
logY<-log(X)
hyperY<-X/(X+1)
plot(logY~X, xlab="q", ylab="E", type="l", lwd=2, ylim=c(0,max(logY)))
lines(hyperY~X,col="red", lwd=2)
legend(x=1,y=max(logY),
       legend=c("log(q)","q/(q+1)"),
       bty="n", col=c("black","red"),
       lty=1, xjust=0, cex=0.9)



For instance, @Chittka:1992p186 assumes an asymptotic curve with limit = 1: $$E_i = \frac{q_i}{q_i+1}$$

Photoreceptor outputs are then projected as equidistant vectors into a colour space. Again, each model arranges these vectors differently; however, the arrangement is arbitrary and has no effect on model predictions. The length of the resultant vector represents the chromaticity distance of the stimulus in relation to the background, and the vector coordinates represent the stimulus locus in the animal colour space (colour locus).

For instance, colour_space generates a general colour space based on any number of photoreceptor types, and calculates colour locus coordinates for a given photoreceptor output:

colour_space(n=3, type="length", length=1, edge=NA, q=c(Qr1,Qr2,Qr3))

Chittka (1992) Colour Hexagon

@Chittka:1992p186 developed a colour vision model based on trichromatic hymenopteran vision. This model has later been extended to tetrachromatic avian vision [@Thery:2002p163]. In colourvision this model has been further extended to accept any number of photoreceptor types ($n\geq2$).

Photoreceptor outputs ($E_i$) are calculated by: $${E_i = \frac{q_i}{q_i+1}}$$

Then, for trichromatic vision, coordinates in the colour space are found by [@Chittka:1992p186]: $${X_1 = \frac{\sqrt{3}}{2}(E_3-E_1)}$$ $${X_2 = E_2-\frac{1}{2}(E_1+E_3)}$$

For tetrachromatic vision [@Thery:2002p163]: $${X_1 = \frac{\sqrt{3}\sqrt{2}}{3}(E_3-E_4)}$$ $${X_2 = E_1-\frac{1}{3}(E_2+E_3+E_4)}$$ $${X_3 = \frac{2\sqrt{2}}{3}\left(\frac{1}{2}(E_3+E_4)-E_2\right)}$$

And for pentachromatic vision [@Gawryszewski:gt]: $${X_1 = \frac{5}{2\sqrt{2}\sqrt{5}}(E_2-E_1)}$$ $${X_2 = \frac{5\sqrt{2}}{2\sqrt{3}\sqrt{5}}\left(E_3-\frac{E_1+E_2}{2}\right)}$$ $${X_3 = \frac{5\sqrt{3}}{4\sqrt{5}}\left(E_4-\frac{E_1+E_2+E_3}{3}\right)}$$ $${X_4 = E_5-\frac{E_1+E_2+E_3+E_4}{4}}$$
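As a quick check of the trichromatic formulas above, the coordinates are easy to compute by hand (a sketch with hypothetical E-values, not package code):

E<-c(0.4, 0.7, 0.6)         # hypothetical photoreceptor outputs E1-E3
X1<-(sqrt(3)/2)*(E[3]-E[1]) # hexagon x coordinate
X2<-E[2]-0.5*(E[1]+E[3])    # hexagon y coordinate
sqrt(X1^2+X2^2)             # chromaticity distance to the background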

CTTKmodel()

The @Chittka:1992p186 model is implemented by function CTTKmodel. This function needs (1) photoreceptor sensitivity curves, (2) a background reflectance spectrum, (3) an illuminant spectrum, and (4) stimulus reflectance spectra.

A worked example:

  1. Load the data files: Apis mellifera photoreceptor sensitivity curves [@Peitsch:1992p214], the background reflectance spectrum, and the illuminant spectrum:
data("bee")
data("Rb")
data("D65")
par(mfrow=c(1,3))
plot(bee[,2]~bee[,1], type="l", ylim=c(0,100),
     col="violet", lwd=2, cex.axis=1, cex.lab=1,
     ann = FALSE,
     mgp = c(3, 0.7, 0))
mtext(side = 2, text = "Sensitivity(%)", line = 2.5, cex=1)
mtext(side = 1, text = "Wavelength(nm)", line = 2.5, cex=1)

lines(bee[,3]~bee[,1], col="blue", lwd=2)
lines(bee[,4]~bee[,1], col="green", lwd=2)
text("a)", x=305,y=98, cex=1.5)

plot(Rb[,2]~Rb[,1], type="l", ylim=c(0,100),
     lwd=2, cex.axis=1, cex.lab=1,
     ann = FALSE,
     mgp = c(3, 0.7, 0))
mtext(side = 2, text = "Reflectance(%)", line = 2.5, cex=1)
mtext(side = 1, text = "Wavelength(nm)", line = 2.5, cex=1)
text("b)", x=305,y=98, cex=1.5)


plot(D65[,2]~D65[,1], type="l", ylim=c(0,5), xlim=c(300,700),
     lwd=2, cex.axis=1, cex.lab=1,
     ann = FALSE,
     mgp = c(3, 0.7, 0))
mtext(side = 2, text = expression("Photon flux"~(umol~m^-2*s^-1)), line = 2.5, cex=1)
mtext(side = 1, text = "Wavelength(nm)", line = 2.5, cex=1)
text("c)", x=305,y=4.9, cex=1.5)



2. Create simulated reflectance data:

midpoint<-seq(from = 500, to = 600, 10)
W<-seq(300, 700, 1)
R<-data.frame(W)
for (i in 1:length(midpoint)) {
  R[,i+1]<-logistic(x = seq(300, 700, 1), x0=midpoint[[i]], L = 70, k=0.04)[,2]+5
}
names(R)[2:ncol(R)]<-midpoint
plot(R[,2]~R[,1], ylim=c(0,100),xlim=c(300,700), xlab="Wavelength (nm)", ylab="Reflectance (%)", type="n", cex=0.9)
colours<-rainbow(n=(ncol(R)-1))
for(i in 2:ncol(R)) {
  lines(R[,i]~R[,1],col=colours[[ncol(R)-i+1]])
}



3. Run @Chittka:1992p186 model:

CTTKmodel3<-CTTKmodel(R=R, I=D65, C=bee, Rb=Rb)

Model output provides the relative quantum catches (Qr), photoreceptor outputs (E), colour locus coordinates (X), and the chromaticity distance of the stimulus in relation to the background (deltaS).

CTTKmodel3



knitr::kable(head(CTTKmodel3), caption = "Table 1. `CTTKmodel()` output of a trichromatic animal, showing the relative quantum catches (Qr), photoreceptor outputs (E), colour locus coordinates (X), and the chromaticity distance of stimulus in relation to the background (deltaS)")

Endler & Mielke (2005)

The original model is available for tetrachromatic animals only. In colourvision, the model was extended to any number of photoreceptors [@Gawryszewski:gt; see also @Pike:2012tv].

First, relative quantum catches are log-transformed:

$$f_i = \ln(q_i)$$

where $q_i$ is the relative quantum catch of each photoreceptor type. The model uses only relative values, so that photoreceptor outputs ($E$) are given by:

$$E_i = \frac{f_i}{f_1+f_2+f_3+...+f_n}$$

Then, for tetrachromatic vision colour locus coordinates are found by [@ENDLER:2005p538]:

$$X_1 = \sqrt{\frac{3}{2}}\left(\frac{1-2E_2-E_3-E_1}{2}\right)$$ $${X_2 = \frac{-1+3E_3+E_1}{2\sqrt{2}}}$$ $${X_3 = E_1-\frac{1}{4}}$$

The tetrachromatic chromaticity diagram (tetrahedron) in @ENDLER:2005p538 has a maximum photoreceptor vector of length 0.75, which gives a tetrahedron with edge length $\sqrt{\frac{3}{2}}$. Colour spaces for other numbers of photoreceptor types may preserve either the same vector length or the same edge length.

For instance, for dichromatic vision, the coordinate ($X_1$) in the colour space preserving the same vector length is found by:

$${X_1 = \frac{3}{4}(E_2-E_1)}$$

whereas if the edge length is preserved, $X_1$ is found by:

$$X_1 = \frac{1}{2}\sqrt{\frac{3}{2}}(E_2-E_1)$$
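A quick numeric comparison of the two scalings (a sketch with hypothetical E-values, not package code):

E<-c(0.45, 0.55)                   # hypothetical photoreceptor outputs
X1.length<-(3/4)*(E[2]-E[1])       # preserves vector length (0.75)
X1.edge<-0.5*sqrt(3/2)*(E[2]-E[1]) # preserves edge length (sqrt(3/2))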

EMmodel()

Using the same data as in the CTTKmodel() example:

EMmodel3<-EMmodel(type="length",R=R,I=D65,Rb=Rb,C=bee)

Model output provides the relative quantum catches (Qr), photoreceptor outputs (E), colour locus coordinates (X), and the chromaticity distance of the stimulus in relation to the background (deltaS).

knitr::kable(head(EMmodel3), caption = "Table 2. `EMmodel()` output of a trichromatic animal, showing the relative quantum catches (Qr), photoreceptor outputs (E), colour locus coordinates (X), and the chromaticity distance of stimulus in relation to the background (deltaS)")

Receptor Noise Limited Models (Vorobyev & Osorio 1998; Vorobyev et al. 1998)

The Receptor Noise Limited Model assumes that chromatic discrimination is limited by noise at the photoreceptors [@Vorobyev:1998p1813; @Vorobyev:1998p335]. Model calculation follows steps similar to those in @Chittka:1992p186 and @ENDLER:2005p538, with one additional step: noise at the resultant vector is calculated from the noise of each photoreceptor type.

Photoreceptor noise is seldom measured directly. In the absence of direct measurements, receptor noise (${e_i}$) can be estimated from the relative abundance of photoreceptor types in the retina and a measurement of a single photoreceptor's noise-to-signal ratio [@Vorobyev:1998p1813; @Vorobyev:1998p335]: $${e_i=\frac{\nu}{\sqrt{\eta _i}}}$$ where ${\nu}$ is the noise-to-signal ratio of a single photoreceptor, and ${\eta_i}$ is the relative abundance of photoreceptor type $i$ in the retina.
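For instance, a sketch with hypothetical abundances (the same n and v values are used in the RNLmodel examples below):

v<-0.1        # single-photoreceptor noise-to-signal ratio
n<-c(1, 2, 1) # hypothetical relative photoreceptor abundances
e<-v/sqrt(n)  # estimated noise of each photoreceptor type
e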

@Vorobyev:1998p1813 aimed to predict colour thresholds. Close to the threshold, the relationship between photoreceptor input and output is nearly linear, so that $E_i=q_i$. However, for comparisons between two colours that are not perceptually near the threshold, one must use a non-linear relationship between photoreceptor input and output [@Vorobyev:1998p335]: $E_i=\ln(q_i)$.

Then, $\Delta$S is calculated by [eq. A7 in @Vorobyev:1998p335]:

$$(\Delta{S})^2 = V \Delta\vec{p} \bullet (V R V^T)^{-1} V \Delta\vec{p}$$ where $V$ is a matrix of column vectors, $\bullet$ denotes the inner product, $T$ denotes the transpose, $\Delta\vec{p}$ is a vector whose components represent differences between $E$-values, and $R$ is a covariance matrix of photoreceptor values. Photoreceptor values are assumed to be uncorrelated; therefore $R$ is a diagonal matrix in which the diagonal elements are the photoreceptor variances ($e_i^2$):

$$R = \begin{bmatrix} e_1^2 & 0 & 0 & \cdots \\ 0 & e_2^2 & 0 & \cdots \\ 0 & 0 & e_3^2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$

The receptor noise limited model was originally developed to calculate $\Delta$S between two reflectance curves directly, without finding colour locus coordinates [see eqs 3-5 in @Vorobyev:1998p1813]. Nonetheless, for visualisation purposes it is useful to project colour vision model results into chromaticity diagrams. This can be done by [@Gawryszewski:gt; see also @HempeldeIbarra:2001tf; and @Renoult:2017dv for alternative formulae]:

$$\vec{s} = \sqrt{(V R V^T)^{-1}} V \vec{p}$$

where $\vec{s}$ is a vector whose components represent the stimulus colour locus coordinates, $\vec{p}$ is a vector whose components represent the stimulus $E$-values, and the other elements are the same as in the preceding formula.
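A minimal numeric sketch of eq. A7 above (assumptions: a trichromat, $V$ taken from colour_space with placeholder q values, and hypothetical noise and E-value differences):

V<-colour_space(n=3, type="length", length=1, q=c(1,1,1))$vector_matrix
e<-c(0.13, 0.06, 0.11)   # photoreceptor noise
Rcov<-diag(e^2)          # diagonal covariance matrix R
dp<-c(0.05, -0.02, 0.10) # hypothetical differences between E-values
Vdp<-V%*%dp
sqrt(drop(t(Vdp)%*%solve(V%*%Rcov%*%t(V))%*%Vdp)) # deltaS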

RNLmodel()

Using the same data as in the CTTKmodel() example:

RNLmodel3<-RNLmodel(model="log",photo=3,R1=R,Rb=Rb,I=D65,C=bee,
                    noise=TRUE,e=c(0.13,0.06,0.11))

The model above calculates $\Delta$S values based on noise measured in Apis mellifera photoreceptors (e=c(0.13,0.06,0.11)). Alternatively, noise may be estimated from photoreceptor relative abundances:

RNLmodel3.1<-RNLmodel(model="log", photo=3, R1=R, Rb=Rb, I=D65, C=bee,
                    noise=FALSE, n=c(1,2,1), v=0.1)

Furthermore, users may add a second reflectance stimulus (R2) to be compared against the first stimulus (R1):

R2<-logistic(x = seq(300, 700, 1), x0=512, L = 70, k=0.01)
RNLmodel3.2<-RNLmodel(model="log", photo=3, R1=R, R2=R2, Rb=Rb, I=D65, C=bee, noise=FALSE, n=c(1,2,1), v=0.1)

Model output provides the photoreceptor noise (e), the relative quantum catches (Qr), photoreceptor outputs (E), colour locus coordinates (X), and the chromaticity distance (deltaS) of the first stimulus in relation to the second stimulus (or to the background when R2=Rb).

head(RNLmodel3.2)

RNLthres()

@Vorobyev:1998p1813 aimed to predict discrimination thresholds of monochromatic stimuli. By definition, thresholds are found when $\Delta{S} = 1$, therefore [@Vorobyev:1998p1813]:

$$(1)^2 = V \Delta\vec{p} \bullet (V R V^T)^{-1} V \Delta\vec{p}$$

RNLthres() calculates thresholds of monochromatic light for a given background, illuminant, set of photoreceptor sensitivities, and photoreceptor noise.

Worked example:

  1. Bee photoreceptor sensitivity curves normalised to 1:
C<-bee
C[,2]<-C[,2]/max(C[,2])
C[,3]<-C[,3]/max(C[,3])
C[,4]<-C[,4]/max(C[,4])
  2. Grey background:
Rb.grey <- data.frame(300:700, rep(0.1, length(300:700)))
  3. Model calculation:
thres<-RNLthres(photo=3, Rb=Rb.grey, I=D65, C=C,
       noise=TRUE, e = c(0.13, 0.06, 0.12))

The output is a data.frame with threshold values (T) and the log of sensitivity values (S) per wavelength (nm). Sensitivity is simply the inverse of the threshold ($S = \frac{1}{T}$).

knitr::kable(head(thres), caption = "Table 3. `RNLthres()` output, showing wavelength (`nm`), threshold values (`T`) and log of the sensitivity values (`S`).")

Generic model

A generic function (GENmodel) allows alternative models to be built based on the same set of general assumptions as the other models. Note, however, that colour locus coordinates may differ, because the positions of the vectors used by GENmodel are not necessarily the same as in each model-specific formula. Also, caution should be taken because models generated by GENmodel are not supported by experimental data.

GENmodel()

Worked examples:

In this case, GENmodel applies the same transformation (func=function(x){x/(1+x)}) and uses a colour space with the same maximum vector length (length=1) as CTTKmodel:

CTTKmodel3<-CTTKmodel(photo=3,R=R,Rb=Rb,I=D65,C=bee)

ANY.CTTKmodel3any<-GENmodel(photo=3, type="length", length=1, R=R, Rb=Rb, I=D65, C=bee,
                            vonKries=TRUE, func=function(x){x/(1+x)},
                            unity=FALSE, recep.noise=FALSE)

Note, however, that although Qr, E, and deltaS values are exactly the same, colour locus coordinates (X) differ between models (Tables 1 and 4). This happens because GENmodel uses a different arrangement of vectors than @Chittka:1992p186. The arrangement of photoreceptor output vectors is arbitrary, has no biological meaning, and has no effect on model predictions.

ANY.CTTKmodel3any

knitr::kable(head(ANY.CTTKmodel3any), caption = "Table 4. `GENmodel( )` output, with model parameters based on the same set of assumptions as in Chittka (1992)")



q<-c(0.8,0.8,0.1)
x<-(sqrt(3)/2)*(q[[3]]-q[[1]])
y<-q[[2]]-0.5*(q[[1]]+q[[3]])

par(mfrow=c(1,2), cex=0.8)
plot(y~x, ylim=c(-1.3,1.3), xlim=c(-1.3,1.3), asp=1, xlab="X1",ylab="X2",
     main="CTTKmodel")
arrows(x0=0,y0=0,x1=0,y1=1, length = 0.10)
arrows(x0=0,y0=0,x1=-sqrt(3)/2,y1=-0.5, length = 0.10)
arrows(x0=0,y0=0,x1=sqrt(3)/2,y1=-0.5, length = 0.10)
graphics::text(x=0,y=1,labels="E1",pos=3)
graphics::text(x=-sqrt(3)/2,y=-0.5,labels="E2", pos=1)
graphics::text(x=sqrt(3)/2,y=-0.5, labels="E3", pos=4)

cp<-colour_space(n=3, type="length", length=1, q=q)
plot(cp$coordinates[[2]]~cp$coordinates[[1]],
     ylim=c(-1.3,1.3), xlim=c(-1.3,1.3), asp=1, xlab="X1",ylab="X2",
     main="GENmodel")
v1<-cp$vector_matrix[,1]
v2<-cp$vector_matrix[,2]
v3<-cp$vector_matrix[,3]
arrows(x0=0,y0=0,x1=v1[[1]],y1=v1[[2]], length = 0.10)
arrows(x0=0,y0=0,x1=v2[[1]],y1=v2[[2]], length = 0.10)
arrows(x0=0,y0=0,x1=v3[[1]],y1=v3[[2]], length = 0.10)

text(x=v1[[1]],y=v1[[2]],labels="E1",pos=1)
text(x=v2[[1]],y=v2[[2]],labels="E2",pos=1)
text(x=v3[[1]],y=v3[[2]],labels="E3",pos=3)



Alternatively, users may choose to change some aspect of a model. In the example below, a model based on receptor noise is calculated, but with a different photoreceptor input-output transformation:

RNLmodel3<-RNLmodel(model="log",photo=3,R1=R,Rb=Rb,I=D65,C=bee,
                    noise=TRUE,e=c(0.13,0.06,0.11))

ANY.RNLmodel3any<-GENmodel(photo=3, type="length", length=1, R=R, Rb=Rb, I=D65, C=bee,
                           vonKries=TRUE, func=function(x){x/(1+x)}, unity=FALSE,
                           recep.noise=TRUE, noise.given=TRUE, e=c(0.13,0.06,0.11))

Note that in this case several parameters differ between models:

head(RNLmodel3[,c("E1_R1","E2_R1","E3_R1","X1_R1","X2_R1","deltaS")])

knitr::kable(head(RNLmodel3[,c("E1_R1","E2_R1","E3_R1","X1_R1","X2_R1","deltaS")]), caption = "Table 5. `RNLmodel( )` results, showing photoreceptor outputs (Ei_R1), colour locus coordinates (Xi_R1), and the chromaticity distance to the background (deltaS).")

head(ANY.RNLmodel3any[,c("E1","E2","E3","X1","X2","deltaS")])

knitr::kable(head(ANY.RNLmodel3any[,c("E1","E2","E3","X1","X2","deltaS")]), caption = "Table 6. `GENmodel( )` based on receptor noise but with a different transformation between photoreceptor input and output. Model results showing photoreceptor outputs (Ei), colour locus coordinates (Xi), and the chromaticity distance to the background (deltaS).")

Chromaticity distances ($\Delta$S)

Chromaticity distances ($\Delta$S) are the Euclidean distances between points in the animal colour space. It is frequently assumed that there is a positive and linear relationship between $\Delta$S values and the probability of discrimination between two colours [although this is not necessarily the case, see @Garcia:2017hl].

In CTTKmodel, EMmodel, and GENmodel outputs, deltaS values represent the distance between the stimulus and the background. In RNLmodel output, deltaS represents the distance between R1 and R2, or between R1 and the background when R2=Rb.

However, one may want to compute pairwise chromaticity distances between all stimuli. This is done by the deltaS function.

deltaS()

The deltaS function calculates a matrix with all possible pairwise comparisons between stimulus reflectance spectra.

dS<-deltaS(CTTKmodel3)
dS
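Each entry is simply the Euclidean distance between two colour loci. A hand check (a sketch, assuming the trichromatic coordinate columns are named X1 and X2):

X<-CTTKmodel3[,c("X1","X2")]
sqrt(sum((unlist(X[1,])-unlist(X[2,]))^2)) # should match dS[1,2]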

It may be useful to visualise the deltaS output graphically using the 'corrplot' package:

library(corrplot)
corrplot(corr=dS, is.corr=FALSE)



Plotting models

Model outputs can be easily plotted using the plot function for dichromatic and trichromatic animals, or plot3d.colourvision (requires package rgl) for tetrachromatic animals. In addition, radarplot plots Qr and E values into a radar plot. For threshold data, plot(model) plots ln-transformed sensitivity values per wavelength.

plot(model)

For dichromatic and trichromatic animals, plot draws model-specific chromaticity diagrams. For instance:

The @Chittka:1992p186 hexagon for trichromatic animals:

par(mar=c(1,1,1,1))
colours<-rainbow(n=(ncol(R)-1))
plot(CTTKmodel3, cex=0.5, col=colours)



@ENDLER:2005p538 adapted to trichromatic animals:

par(mar=c(1,1,1,1))
plot(EMmodel3, cex=0.8, col=colours)



Colour spaces of receptor noise limited models [@Vorobyev:1998p1813; @Vorobyev:1998p335] do not have boundaries. Therefore, it is useful to plot results alongside vectors representing $E$-values:

par(mar=c(5, 4, 4, 2) + 0.1)
plot(RNLmodel3, cex=0.8, pch=16, col=colours)



plot(model) for threshold values

Colour thresholds based on receptor noise limited models [@Vorobyev:1998p1813]:

plot(thres)



plot3d.colourvision(model)

For tetrachromatic animals, plot3d.colourvision plots model-specific 3D colour spaces. The @Chittka:1992p186 model is represented by a hexagonal trapezohedron:

library(rgl)
CTTKmodel4<-CTTKmodel(R=R, I=D65, C=photor(c(350,420,490,560),beta.band=FALSE), Rb=Rb)
par3d(windowRect=c(100,100,800,800))
plot3d.colourvision(CTTKmodel4, s.col = colours, size=4)
rgl.viewpoint(theta = 30, phi = 10, zoom=0.8)
rgl.snapshot("CTTKmodel4.png")
rgl.close()
knitr::include_graphics('./CTTKmodel4.png', auto_pdf = FALSE)



EMmodel4<-EMmodel(R=R, I=D65, C=photor(c(350,420,490,560),beta.band=FALSE), Rb=Rb)
par3d(windowRect=c(100,100,800,800))
plot3d.colourvision(EMmodel4, s.col = colours, size=4)
rgl.viewpoint(theta = 0, phi = 30, zoom=0.8)
rgl.snapshot("EMmodel4.png")
rgl.close()
knitr::include_graphics('./EMmodel4.png', auto_pdf = FALSE)



RNLmodel4<-RNLmodel(model="log", R1=R, I=D65, C=photor(c(350,420,490,560),beta.band=FALSE), Rb=Rb, noise=TRUE, e=c(0.1,0.07,0.07,0.05))
par3d(windowRect=c(100,100,800,800))
plot3d.colourvision(RNLmodel4, xlab="", ylab="", zlab="", col = colours, size=4)
rgl.viewpoint(theta = 30, phi = 10, zoom=1)
rgl.snapshot("RNLmodel4.png")
rgl.close()
knitr::include_graphics('./RNLmodel4.png', auto_pdf = FALSE)



radarplot(model)

radarplot plots quantum catches or $E$-values into a radar plot.

CTTKmodel5<-CTTKmodel(R=R, I=D65, C=photor(c(350,410,470,530,590),beta.band=FALSE), Rb=Rb)
radarplot(CTTKmodel5, item="Qr", item.labels=TRUE, border=colours)



radarplot(CTTKmodel5, item="E", item.labels=TRUE, ann=FALSE, xaxt = "n", yaxt = "n", ylim=c(-1.2,1.2), xlim=c(-1.2,1.2), border=colours)



References


