Table of contents
To do and done lists
Email to Ander
R code
See also other codes and implementations developed by F&PIG researchers.
TO DO
confirm computational results with Ander (especially mb, dw, sigma, abs) and Dominik (especially moments)
sigma convolution
non-monotone unary transformations (abs, sin, cos)
pls, bel, prob=poss=alpha=mass, cut=ci=pred?
create union structure that unites pbox, interval, and consonant objects into the un object (are there OO conventions for this, e.g., polymorphism and inheritance?)
cross strengthening of moments and the edges (u, d, and mb)
cross strengthening of cuts
create function to forget consonant part of the unified number
establish the default consonant structure (Baudrit's conception might fail to exist, but it looks like Hose's conception always exists if the p-box does)
devise display conventions for unified numbers
develop the mixture operation (is it just the same as the constructor for DSSs?)
resolve minor bugs and weirdnesses: VGAM spontaneous loading
DONE
develop intersection and envelope functions
unbounded structures don't plot
unary transformations by consonantifying (folding) the original-flavor c-boxes in https://sites.google.com/site/confidenceboxes/software
various constructors, and property functions (core, range=support=base?, moments, cuts, leftside, as.character, etc.)
develop plotting routine for consonant structures, both staircasing and connect-the-dots options
define the basic data structure as "mb", the inverse trace, i.e., the quantiles of the trace function (the function of x-values that jumps up at the sorted left endpoints of the DSS focal elements and jumps down at the reverse-sorted right endpoints), both sides stored in a single array
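For concreteness, here is a minimal sketch of that layout using the consonant() constructor from the R code below (st=3 for readability):
f = consonant(1, 2, 3, 4, st=3)   # trapezoid: support [1,4], core [2,3]
f@mb   # 1.0 1.5 2.0 3.0 3.5 4.0; left side ascends to the core, right side ascends away from it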
Scott writes to Ander:
I want to implement a working calculus for the "union" numbers in R. It seems to me that this union requires four kinds of things:
1) implementing arithmetic (unary transformations and arithmetic and logical operations) on consonant structures (aka possibility distributions, aka fuzzy numbers),
2) four functions that
2.1) intersect a consonant structure with the consonant structure implied by a p-box,
2.2) intersect a consonant structure with the consonant structure implied by bounds on moments (i.e., min, max, mean and standard deviation),
2.3) intersect a p-box with the p-box implied by a consonant structure, and
2.4) intersect bounds on the moments (min, max, mean and standard deviation) with those implied by a consonant structure, and
3) decisions about whether and how to display outputs,
4) constructor functions (which I guess should be for internal use only...users won't be constructing consonant structures directly).
From our last non-restaurant discussion, I think the first step is implementing the arithmetic for consonant structures that you've described to me, in consultation with your Julia "possibilistic" code. The consonance arithmetic, that is, the transformations and most of the arithmetic operations, is finished in the R code below. I do have a few questions, however.
Your Julia code seems to make distinctions between
FuzzyNumber = Fuzzy
Possibility = PossNumber
AbstractPoss
I'm not sure whether or why these should be distinguished. Perhaps it has something to do with your plan to support multiple structures like unions of intervals.
Your Julia possibilistic code has copulas, but I believe you told me that the perfect, opposite and general convolutions were not defined for consonant structures. Do I have that right? If so, I presume that we should use the Frechet operations when we have any dependence other than independence. Right?
How should a c-box be incorporated into a new union number? There are both the original c-boxes and Michael's new consonant c-box for the binomial rate (which seems to be the only one, but I presume there are analogs for other discrete distributions too).
I presume that, for Michael's consonant binomial rate, we should create a new constructor (taken from his code at http://www.alexvalcon-llc.com/examples/binomial-inference) that makes the consonant structure from some input data for k and n (or k and m), and then just create a cloaking p-box from that structure. Alternatively, the cloaking p-box could be the old p-box-shaped c-box, which is still appropriate for one-sided confidence intervals.
For the original-flavour c-boxes, including all those at https://sites.google.com/site/confidenceboxes/software, I am guessing that the cloaking p-box is what we have now, and the consonant structure is to be inferred from them by the three-step consonantisation.
The four functions 2.1,...,2.4 are mostly about the four credal implications
p-box => conso [uses the "three-step" procedure: double cdf values, flip the right edge (i.e., subtract the doubled p-values from 2), and truncate both sides at the p-value 1]
moments => conso [section 3.4.2.2 in https://dominikhose.github.io/dissertation/diss_dhose.pdf, but note it's not equal to the three-step procedure on mmms()]
conso => p-box [just flip the right edge of the conso]
conso => moments [not sure how to implement section 3.4.1 in https://dominikhose.github.io/dissertation/diss_dhose.pdf]
I've implemented the first three, but am at a loss about how to obtain moments from a consonant structure. We already know how to intersect p-boxes, and I presume that intersections of consonant structures can be obtained with this function:
intersect.consonant = function(x,y) consonant(mb=c(pmax(leftside.consonant(x),leftside.consonant(y)),pmin(rightside.consonant(x),rightside.consonant(y)))) # unless their cores don't intersect
where mb is the (single) array holding the consonant structure's inverse trace, i.e., the quantiles of the discretized membership at alpha levels (1/st) * {1, 2, ..., st}, ascending up the left side and then descending down the right side, 2*st entries in all
The trace at any point is the sum of the masses of all focal elements that include that point,
Tr(w_i) = Sum_{w_i in A_j} m_j.
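For example, a quick numerical sketch of the trace for three equal-mass nested focal elements (hypothetical values):
L = c(1, 1.5, 2); R = c(4, 3.5, 3); m = rep(1/3, 3)   # focal elements [L_j, R_j] with masses m_j
trace_at = function(w) sum(m[(L <= w) & (w <= R)])    # Tr(w) = sum of masses of focal elements containing w
sapply(c(0, 1.2, 2.5, 3.8, 5), trace_at)              # 0, 1/3, 1, 1/3, 0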
Cliff Joslyn says [https://cliffjoslyn.github.io/Docs/JoC01b.pdf] "Depending on the structure present, the resulting point traces and evidence measures have different properties. When consonant, the plausibility measure is naturally a possibility measure. In the more general case of consistency, while the plausibility measure is not a natural possibility measure, the trace is a natural possibility distribution, and a unique and well justified possibilistic approximation of the plausibility measure is available. Finally, consistent approximations of inconsistent random sets are also available (Joslyn, Cliff, 1997, Possibilistic normalization of inconsistent random intervals, Advances in Systems Science and Applications, edited by Wansheng Tang, pp. 44-51).
"Consonant random sets provide a good empirical basis for applications of possibility theory. And yet, consistent but non-consonant random sets are a much more common case [...]."
About consonance, see Dubois and Prade (1990, Consonant approximations of belief functions, Int. J. Approximate Reasoning, v. 4, pp. 419-449).
See also Joslyn (1996, Aggregation and completion of random sets with distributional fuzzy measures, Int. J. of Uncertainty, Fuzziness, and Knowledge-Based Systems, v. 4:4, pp. 307-329).
Not sure how to implement the sigma convolutions. Having a tough time translating the Julia algorithm to R, and don't understand what happens after we melt the Christmas tree. I need to study the "DSS to FN" page on F&PIG.
I don't understand what your function mobiusTransform2D() should be used for. I believe I've implemented the basic tau convolution for Frechet convolutions, and don't think I needed it. Am I wrong?
Finally, I know how to make the medusa head for a consonant structure. They are just the distributions consistent with the Dempster-Shafer structure, right?
By the way, maybe we can refer to a union number as "UN" or "un", like you use fn for fuzzy number. Too skool for cool? Of course, I've been excising the word 'fuzzy' and also 'possibilistic'. It will be much easier to sell to Bayesians if they don't get a whiff of the Zadeh stink.
##############################################################################################
# Miscellaneous functions or shortcuts that should be omitted or discharged during integration
##############################################################################################
source("C:\\Users\\ferson\\Desktop\\pba BETTER.r")
#left = min
#right = max
overlap = function(ff,gg) return(max(left(ff),left(gg)) <= min(right(ff),right(gg)))
overlap.consonant = function(f,g) {cf = core.consonant(f); cg = core.consonant(g); return(max(left(cf),left(cg)) <= min(right(cf),right(cg)))}
#"%/%" <- function(x,y) if (is.uncertain(x) || is.uncertain(y)) frechetconv.pbox(x,y,'/') else do.call(.Primitive("%/%"),list(x,y))
#############################################################################################
# Class specification and principal consonant structure constructor
#############################################################################################
quiet = setClass('consonant',representation(mb='numeric', cr='numeric', su='numeric', st='numeric'))
consonantlevels = 200
consonant = function(a=NULL, b=NULL, c=NULL, d=NULL, cr=NULL, su=NULL, st=consonantlevels, mb=NULL) {
  if (missing(a)) {a=b=-Inf; c=d=Inf}
  if (missing(b)) b=c=d=a
  if (missing(c)) {c=d=b; b=a}
  if (missing(d)) {d=c; c=b}
  if (missing(su)) su = c(a,d)
  if (missing(cr)) cr = c(max(su[[1]],a,b),min(c,d,su[[2]]))
  if (missing(mb)) {
    if (all(su==c(-Inf,Inf))) {
      mb=c(rep(-Inf,st),rep(Inf,st)); mb[[st]]=cr[[1]]; mb[[st+1]]=cr[[2]]
    } else
      mb = c(seq(left(su), left(cr), length.out=st), seq(right(cr), right(su), length.out=st))
  } else {
    long = length(mb); st = long %/% 2; su = c(mb[1],mb[long]); cr = mb[c(0,1)+st] # maybe a new st should interpolate a given mb
  }
  f <- new("consonant", mb=mb, cr=cr, su=su, st=st)
  return(f)
}
fn = fuzzy = consonant
#############################################################################################
# interrogators
#############################################################################################
as.character.consonant = function(f, ...) {
sayint = function(i,...) {l = left(i); h = right(i); if (isTRUE(all.equal(l,h))) h else paste('[',format(l,...),',',format(h,...),']',sep='')}
paste(sep='','C-box: (range=',sayint(f@su),', core=',sayint(f@cr),')')
}
print.consonant = function(f, ...) cat(as.character.consonant(f, ...),'\n')
plot.consonant = function(f,col=3,new=TRUE,xlim=range(f@mb[is.finite(f@mb)]),lwd=2,type='sS',...) if (!is.vacuous.consonant(f)) { # shouldn't we be plotting these as step functions rather than merely connecting the points????
ff = f@st-1
ff = 0:ff/ff
lf = length(f@mb)
if (new) plot(NULL,xlim=xlim,ylim=c(0,1),...)
if (type %in% c('l','b','o')) lines(f@mb,c(ff,rev(ff)),col=col,lwd=lwd,...)
if (type %in% c('p','b','o')) points(f@mb,c(ff,rev(ff)),col=col,lwd=lwd,...)
if (type %in% c('s','S','sS')) {
half = lf %/% 2
lines(f@mb[1:half],ff,col=col,lwd=lwd, type='S',...)
lines(f@mb[half:(half+1)],rep(1,2),col=col,lwd=lwd,...)
lines(f@mb[(half+1):lf],rev(ff),col=col,lwd=lwd, type='s',...)
}}
show.consonant = function(f, ...) {
try(plot.consonant(f, ...))
print.consonant(f)
}
quiet = setMethod("show", "consonant", function(object)show.consonant(object))
left.consonant = function(f) f@mb[[1]]
right.consonant = function(f) f@mb[[length(f@mb)]]
core.consonant = function(f) { f@mb[c(0,1)+f@st] }
support.consonant = range.consonant = function(f) f@mb[c(1, length(f@mb))]
cut.consonant = ci.consonant = confidenceinterval.consonant = function(f, conflevel=0.95, alpha=1-conflevel) f@mb[c(alpha * (f@st-1) + 1, 2*f@st - (f@st-1)*alpha)]
ci.pbox <- confidenceinterval.pbox <- function(b, conf=0.95, alpha=(1-conf)/2, beta=1-(1-conf)/2) interval(left(cut(b,alpha)), right(cut(b,beta)))
is.finite.consonant <- function(x) all(c(is.finite(left(x)),is.finite(right(x))))
is.vacuous.consonant = function(f) all(f@mb[1:f@st]==-Inf) && all(f@mb[(f@st+1):(2*f@st)]==Inf)
leftside.consonant = function(x, st=length(x@mb)%/%2) {ST=length(x@mb)%/%2; x@mb[seq( 1, ST, length.out=st)]}
rightside.consonant = function(x, st=length(x@mb)%/%2) {ST=length(x@mb)%/%2; x@mb[seq(ST+1, 2*ST, length.out=st)]}
#############################################################################################
# conversions among the three magisteria (probability bounds, consonant structures, and moments)
#############################################################################################
makeconsonant.pbox = function(x) { # uses the three-step procedure: double cdf values, flip the right edge (i.e., subtract the doubled p-values from 2), and truncate both sides at the p-value 1
mx = c(x@u, x@d)
lx = length(mx)
half = lx %/% 4
return(fn(mb=rep(mx[c(1:half, (lx-half+1):lx)], rep(2,2*half)))) # does this use outward-directed rounding? does it need to worry about it?
}
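# For intuition, a minimal sketch of the three steps on raw cdf values, independent of the pbox class
# (p here is a hypothetical cdf discretisation):
p = seq(0, 1, length.out = 9)   # cdf values at some quantiles
pmin(1, 2*p)                     # left edge: doubled, truncated at one
pmin(1, 2 - 2*p)                 # right edge: flipped (two minus doubled), truncated at one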
makepbox.consonant = function(x) { # just flip the right edge of the consonant structure
return(pbox(u=leftside.consonant(x), d=rightside.consonant(x))) # does this use outward-directed rounding? does it need to worry about it?
}
#makeconsonantfrommoments = function(m,M,mu,sd) # section 3.4.2.2 in https://dominikhose.github.io/dissertation/diss_dhose.pdf
# return(makeconsonant.pbox(mmms(m,M,mu,sd))) # doesn't agree with Dominik...the tails are too fat...is this expected?
makeconsonantfrommoments = function(m,M,mu,sd,st=consonantlevels) {# section 3.4.2.2 in https://dominikhose.github.io/dissertation/diss_dhose.pdf
p=(0:(st-1))/(st-1)
return(consonant(mb=pmax(m,pmin(M,mu + c(-sd/sqrt(p), rev(sd/sqrt(p)))))))
}
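# A quick sanity check: by construction the alpha-cut of this structure is the Chebyshev-style
# interval mu +/- sd/sqrt(alpha), clipped to [m, M]:
cc = makeconsonantfrommoments(0, 100, 50, 5)
cut.consonant(cc, alpha = 0.25)   # roughly 50 +/- 5/sqrt(0.25) = [40, 60]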
moments.consonant = function(x) { # not sure how to implement section 3.4.1 in https://dominikhose.github.io/dissertation/diss_dhose.pdf
# see the red question under the first graph in https://sites.google.com/view/fnpig/faq
b = makepbox.consonant(x)
return(list(m=left.pbox(b), M=right.pbox(b), mu=mean.pbox(b), sd=sd.pbox(b))) # no improvement?
}
makeconsonant.dss = function(x, st=consonantlevels) { # assumes the list of focal elements (pairs of values) are equally weighted and are consonant
# if you crazily reduce the st parameter to match the number of focal elements, note that you'll
# want to plot the resulting structure with the type='l' rather than the default staircase option
L = R = NULL
lx = length(x)
for (i in 1:lx) { L = c(L,left(x[[i]])); R = c(R,right(x[[i]])) }
L = sort(L)
R = sort(R)
dup = st %/% lx # might change st to 1000
L = rep(L, rep(dup,lx))
R = rep(R, rep(dup,lx))
wh = seq(1, length(L), length.out=st)
consonant(mb=c( L[wh], R[wh] ))
}
#############################################################################################
# unary transformations
#############################################################################################
# should agree with Risk Calc for abs [-1,2,3], and abs [-1,.2,.3]
abs.consonant <- function(x) { # level-wise absolute value; handles cuts that straddle zero
st = length(x@mb) %/% 2; L = x@mb[1:st]; R = rev(x@mb[(st+1):(2*st)]) # matched cut endpoints, widest cut first
newL = ifelse(L >= 0, L, ifelse(R <= 0, -R, 0)); newR = pmax(abs(L), abs(R)) # zero if the cut straddles zero
consonant(mb=c(newL, rev(newR)))
}
exp.consonant = function(x) consonant(mb=exp(x@mb))
log.consonant = function(x) consonant(mb=log(x@mb))
sqrt.consonant = function(x) consonant(mb=sqrt(x@mb))
negate.consonant = function(x) consonant(mb=-rev(x@mb))
reciprocate.consonant = function(x) consonant(mb=1/rev(x@mb))
complement.consonant = function(x) consonant(mb=1-rev(x@mb)) # 1-x is decreasing, so reverse to keep the mb array sorted
shift.consonant = function(s, x) consonant(mb=s+x@mb)
scale.consonant = mult.consonant = function(m, x) if (m < 0) negate.consonant(mult.consonant(abs(m),x)) else consonant(mb=m*x@mb)
sign.consonant = function(x) if (right.consonant(x)<0) consonant(-1) else if (0<left.consonant(x)) consonant(1) else consonant(mb=sign(x@mb))
int.consonant = trunc.consonant <- function(x) consonant(mb=trunc(x@mb))
round.consonant = function(x) consonant(mb=round(x@mb))
ceiling.consonant = function(x) consonant(mb=ceiling(x@mb))
truncate.consonant = function(x,m,M) if (overlap(core.consonant(x),c(m,M))) consonant(mb=c(pmax(leftside.consonant(x),m),pmin(rightside.consonant(x),M))) else stop('Truncation range excludes core')
asin.consonant = function(x) consonant(mb=asin(x@mb))
acos.consonant = function(x) consonant(mb=acos(x@mb))
atan.consonant = function(x) consonant(mb=atan(x@mb))
positivepart.consonant = function(x) consonant(mb=pmax(0,x@mb)) # zero if no positive part
negativepart.consonant = function(x) consonant(mb=pmin(0,x@mb)) # zero if no negative part
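# The TO DO list asks for non-monotone unary transformations like sin and cos. A hedged sketch,
# pending a proper interval-arithmetic treatment: enclose each alpha-cut's image by sampling it on
# a grid. This is only an approximation (it can miss extrema between grid points) and it assumes
# finite support; the name unary.consonant and the grid parameter are illustrative, not settled.
unary.consonant = function(x, g, grid = 100) { # level-wise range enclosure of g over each cut
  st = length(x@mb) %/% 2
  L = x@mb[1:st]; R = rev(x@mb[(st+1):(2*st)])  # matched cut endpoints, widest cut first
  newL = newR = numeric(st)
  for (i in 1:st) {
    v = g(seq(L[i], R[i], length.out = grid))
    newL[i] = min(v); newR[i] = max(v)
  }
  consonant(mb = c(newL, rev(newR)))            # nested cuts stay nested under g
}
# e.g., sin.consonant = function(x) unary.consonant(x, sin)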
#############################################################################################
# non-convolutional binary operations
#############################################################################################
# mix.consonant # maybe this should just be the makeconsonant.dss() function
intersect.consonant = function(x,y) consonant(mb=c(pmax(leftside.consonant(x),leftside.consonant(y)),pmin(rightside.consonant(x),rightside.consonant(y)))) # unless their cores don't intersect
envelope.consonant = function(x,y) consonant(mb=c(pmin(leftside.consonant(x),leftside.consonant(y)),pmax(rightside.consonant(x),rightside.consonant(y)))) # unless their cores don't intersect
imp.consonant <- function (..., na.rm = FALSE) { # imposition = level-wise intersection; assumes a common st discretisation
elts <- list(...)
mmm <- elts[[1]]
for (each in elts[-1]) {
if (!overlap.consonant(mmm, each)) {warning('Imposition is empty'); return(NULL)}
mmm <- intersect.consonant(mmm, each)
}
mmm
}
env.consonant <- function (..., na.rm = FALSE) { # envelope; assumes a common st discretisation
elts <- list(...)
mmm <- elts[[1]]
for (each in elts[-1])
mmm <- envelope.consonant(mmm, each)
mmm
}
pmin.consonant <- lesser.consonant <- least.consonant <- function (..., na.rm = FALSE) { # level-wise minimum of the cuts
elts <- list(...)
mmm <- elts[[1]]
for (each in elts[-1]) {
st = min(length(mmm@mb), length(each@mb)) %/% 2
mmm <- consonant(mb=c(pmin(leftside.consonant(mmm,st),leftside.consonant(each,st),na.rm=na.rm), pmin(rightside.consonant(mmm,st),rightside.consonant(each,st),na.rm=na.rm)))
}
mmm
}
pmax.consonant <- bigger.consonant <- biggest.consonant <- function (..., na.rm = FALSE) { # level-wise maximum of the cuts
elts <- list(...)
mmm <- elts[[1]]
for (each in elts[-1]) {
st = min(length(mmm@mb), length(each@mb)) %/% 2
mmm <- consonant(mb=c(pmax(leftside.consonant(mmm,st),leftside.consonant(each,st),na.rm=na.rm), pmax(rightside.consonant(mmm,st),rightside.consonant(each,st),na.rm=na.rm)))
}
mmm
}
#############################################################################################
# convolutions
#############################################################################################
levelwise.consonant = function(x, y, op = '+') {
if (op=='-') return(levelwise.consonant(x,negate.consonant(y),'+'))
if (op=='/') return(levelwise.consonant(x,reciprocate.consonant(y),'*'))
mx = x@mb
my = y@mb
lx = length(mx)
ly = length(my)
if (lx < ly) my = my[seq(1,ly, length.out=lx)] else if (ly < lx) mx = mx[seq(1,lx, length.out=ly)]
# N.B., we should use outward-directed rounding! The seq() function yields mx[1+(1:lx-1)*((ly-1)/(lx-1))]
return(fn(mb=do.call(op,list(mx,my))))
}
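# As a quick check, levelwise (perfect-dependence) addition just adds matching cut endpoints:
h = levelwise.consonant(fn(1, 2, 3), fn(10, 20, 30))
support.consonant(h)   # 11 33
core.consonant(h)      # 22 22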
# C = function(x,y) x * y # this is the independence copula pi
mobiusTransform2D = function(x, y, C) {
mx = x@mb
my = y@mb
bx = seq(1, 0, length.out = length(mx))
by = seq(1, 0, length.out = length(my))
cartProd = NULL #Array{IntervalU{Float64},1}[]
masses = NULL
for (i in 1:(length(mx) - 1)) for (j in 1:(length(my) - 1)) {
thisMass = C(bx[i], by[j]) - C(bx[i + 1], by[j]) - C(bx[i], by[j + 1]) + C(bx[i + 1], by[j + 1])
masses = c(masses, thisMass[1])
cartProd = c(cartProd, list(c(mx[i], my[j]))) # store the pair as one element; N.B. only left endpoints stand in for the focal elements in this draft
}
list(masses, cartProd)
}
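# The masses returned by mobiusTransform2D should sum to one for any copula C, since the double
# differences telescope to C(1,1) - C(0,1) - C(1,0) + C(0,0) = 1. A quick check with the
# independence copula (small st keeps the double loop cheap):
x = fn(1, 2, 3, st = 10); y = fn(2, 3, 4, st = 10)
mt = mobiusTransform2D(x, y, function(u, v) u * v)   # pi, the independence copula
sum(mt[[1]])                                          # ~1 up to floating point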
sigma.consonant = function(x,y,op='+') { # placeholder: the sigma (independence) convolution is not yet translated from Julia
return(fn(0))
}
# Untranslated Julia source for the sigma convolution, kept for reference and commented out so this file still parses
# (the '?' characters are operator symbols that were lost in transit):
#if C.func == BivariateCopulas.opp; return levelwiseOpp(x, y, op = op);end
#if C.func == BivariateCopulas.perf; return levelwise(x, y, op = op);end
#zNum = max(length(x.Membership), length(y.Membership))
#masses, cartProd = mobiusTransform2D(x, y, C) # Get Cartesian prod and masses from Möbius
#zs = [op(ints[1], ints[2]) for ints in cartProd] # Evaluate Cartesian product with interval arithm
## zUniq = unique(zs) # Remove repeated intervals for efficiency
#zUniq = zs
##=
#lefts = left.(zUniq);
#rights = sort(right.(zUniq),rev = true);
#subPos = interval.(lefts, rights) =#
#mems = [sum(masses[ zs .? this ]) for this in zUniq] # Find beliefs
#zPos = range(0, 1, length = zNum + 1)
#mz = zUniq[1] # IntervalU{Float64}[]
#for i = 2:length(zPos) - 1 # Find alpha cuts for return Fuzzy
#  theseOnes = zPos[i] .<= mems .< zPos[i + 1]
#  if !any(theseOnes);
#    thisInt = zMems[end]
#  else
#    sets = reduce(vcat, getfield.(zs[theseOnes], :v))
#    thisInt = ?(sets)
#  end
#  push!(zMems, thisInt);
#end
## zMems = sort(zMems, lt = ?, rev= true)
## if !isnested(zMems); zMems= makeCons(zMems);end
#return(consonant(mb=zMems))
#dw.consonant = function(x,y,op='+') {
# mx = x@mb
# my = y@mb
# lx = length(mx)
# ly = length(my)
# bx = seq(1, 0, length.out = lx)
# by = seq(1, 0, length.out = ly)
# mz = outer(mx,my,op)
# bz = outer(bx,by,C)
# zPosNew = seq(0, 1, length.out = length(mz) + 1)
# mzNew = NULL # IntervalU{Float64}[]
# for (i = 1:(length(zPosNew) - 1)) {
# theseOnes = zPosNew[i] .<= bz .<= zPosNew[i + 1] ############### ?
# if (!any(theseOnes)) thisInt = mzNew[end] else {
# sets = reduce(vcat, getfield.(mz[theseOnes], :v)) ############### ?
# thisInt = ?(sets) ############### ?
# }
# mzNew = c(mzNew, thisInt)
# }
# mzNew = rev(mzNew)
# return(consonant(mb=mzNew))
# }
dw.consonant = function(x,y,op='+') { # if this is called a lot, it might be smart to store the mb inputs formulaically rather than discretely
if (op=='-') return(dw.consonant(x,negate.consonant(y),'+'))
if (op=='/') return(dw.consonant(x,reciprocate.consonant(y),'*'))
mx = x@mb
my = y@mb
lx = length(mx)
ly = length(my)
if (lx < ly) my = my[seq(1,ly, length.out=lx)] else if (ly < lx) mx = mx[seq(1,lx, length.out=ly)] # alternatively, mx[1+(1:lx-1)*((ly-1)/(lx-1))]
# shouldn't we use outward-directed rounding? #################################################################### ?
lx = length(mx) # may have changed
half = lx %/% 4
bottom = c(1:half, (lx-half+1):lx)
mz = do.call(op,list(mx[bottom],my[bottom]))
mz = rep(mz, rep(2,2*half))
return(fn(mb=mz))
}
#############################################################################################
# constructor for Michael Balch's consonant c-box for the binomial rate
#############################################################################################
consonant_binomial_rate = function(n,k, N=consonantlevels-1) {
aa = 0:N/N
cc = binomial_ConfInt(n,k,aa)
#f = consonant(mb=c(cc[,1],rev(cc[,2]))) # I have no idea why this does not work; the construction seems to work outside the function wrapper
f = consonant(mb=c(cc[1:consonantlevels],rev(cc[(consonantlevels+1):(2*consonantlevels)])))
return(f)
}
# Algorithms for binomial inference:
# binomial_pvalue - function for computing two-sided p-values
# binomial_ConfInt - function for computing two-sided confidence intervals
# Developed by Michael Scott Balch, PhD, Alexandria Validation Consulting, LLC, 2018-02-10
# BY USING THESE ALGORITHMS, THE USER ACCEPTS ALL RESPONSIBILITY FOR THEIR APPLICATION.
# These functions are based on formulas derived in Section 5 of Balch, M. (2019) A unimodal consonant confidence structure for binomial inference
# derived using the p-value formulation. These functions are designed for binomial inference in which the data were generated from a fixed number
# of trials experiment. The results may not be appropriate for an experiment with different stopping rules. Further note that, while these algorithms
# do deliver p-values to within machine error and confidence intervals to within user-specified tolerance, it is up to the user to apply them properly.
# However, there is more to statistical inference than correctly executing the mathematics. The user takes all responsibility for the proper application
# of these algorithms.
binomial_pvalue <- function( n , k , th ) { # exact two-sided p-values (i.e., plausibilities) for binomial inference
# INPUTS:
# n, number of trials, must be a single whole number
# k, number of "successes", must be a single non-negative integer less than or equal to n
# th, hypothetical value(s) of parameter (i.e., underlying probability)
# being assessed. Can be a singleton or array of values between zero and one
# OUTPUT:
# pval, pointwise plausibility at desired theta points, given k successes in fixed n trials
# Will be singleton or array with same length as input th
thMin = min(th)
thMax = max(th)
if( ( thMin < 0 ) | ( thMax > 1 ) ){
print('ERROR! ALL THETA VALUES MUST BE IN [0,1]')}
if( (length( n ) != 1) | (length(k) != 1) ){
print('ERROR! Function will only accept a single value each, for n and k.')}
if( (n < 1) | (k > n) | (k < 0) | ( floor(n) != n ) | ( floor(k) != k ) ){
print('ERROR! n must be whole number. k must be integer between 0 and n (inclusive).')}
pval = array( 0 , length(th) )
if( k > 0 ) thLmid = critThetaWalley( n , k , k-1 ) else thLmid = 0
if( k < n ) thRmid = critThetaWalley( n , k , k+1 ) else thRmid = 1
JLoc = which( ( thLmid <= th ) & ( th <= thRmid ) )
pval[JLoc] = 1
thL = thLmid
thR = thRmid
ii = k-1
while( ( ii >= 0 ) & ( thL > thMin ) ){ # working backwards from middle
thR = thL
thL = critThetaWalley( n , k , ii-1 )
JLoc = which( ( thL <= th ) & ( th < thR ) )
pval[JLoc] = pbinom( ii-1 , n , th[JLoc] ) + 1 - pbinom( k-1 , n , th[JLoc] )
ii = ii-1
}
ii=k+1
thR = thRmid
thL = thLmid
while( ( ii <= n ) & ( thR < thMax ) ){ # working forwards from middle
thL = thR
thR = critThetaWalley( n , k , ii+1 )
JLoc = which( ( thL < th ) & ( thR >= th ) )
pval[JLoc] = 1 - pbinom( ii , n , th[JLoc] ) + pbinom( k , n , th[JLoc] )
ii = ii+1
}
return( pval )
}
binomial_ConfInt <- function( n , k , a ){ # two-sided confidence intervals for binomial inference
# INPUTS:
# n, number of trials, must be a single whole number
# k, number of "successes", must be a single non-negative integer less than or equal to n
# a, 1-Confidence level of desired confidence intervals
# Can be a singleton or array of values between zero and one
# OUTPUT:
# ConfBounds, 1-a confidence interval(s)
# a 2D array with 2 columns and with each row corresponding to a value of input "a"
# ConfBounds[,1] are the lower bounds
# ConfBounds[,2] are the upper bounds
tol = a * 1e-6
JLoc = which( tol < 1e-12 )
tol[JLoc] = 1e-12
LB = array( 0 , length(a) )
UB = array( 1 , length(a) )
JLoc = which( a == 0 )
LB[JLoc] = 0
UB[JLoc] = 1
Jelse = setdiff( 1:length(a) , JLoc )
if (length(Jelse) > 0 ) aMin = min( a[Jelse] ) else aMin = 0
p1 = 1
if( k == 0 ) thL = 0 else thL = critThetaWalley( n , k , k-1 )
ii = k-1
while( ( aMin > 0 ) & ( ii >= 0 ) & ( p1 >= aMin ) ){ # Moving Left from Middle
thR = thL
thL = critThetaWalley( n , k , ii-1 )
p3 = p1
p2 = pbinom( ii-1 , n , thR ) + 1 - pbinom( k-1 , n , thR )
p1 = pbinom( ii-1 , n , thL ) + 1 - pbinom( k-1 , n , thL )
# Part One: Discrete Jump
JLoc = which( ( p2 <= a ) & ( a <= p3 ) )
LB[JLoc] = thR
# Part Two: Continuous Pointwise Plausibility Region
JLoc = which( ( p1 < a ) & ( a < p2 ) )
thLsec = array( thL , length(JLoc) )
thRsec = array( thR , length(JLoc) )
errLsec = p1 - a[JLoc]
errRsec = p2 - a[JLoc]
while( length( JLoc ) > 0 ){ # Secant method for a-values in the continuous region
thNew = ( errRsec*thLsec - errLsec*thRsec ) / ( errRsec - errLsec )
errNew = pbinom( ii-1 , n , thNew ) + 1 - pbinom( k-1 , n , thNew ) - a[JLoc]
JFin = which( abs( errNew ) <= tol[JLoc] )
LB[ JLoc[JFin] ] = thNew[JFin] # THIS IS WHERE FINISHED THETA GETS WRITTEN TO BOUND
JL = which( ( ( errNew * errLsec ) > 0 ) & ( abs(errNew) > tol[JLoc] ) )
JR = which( ( ( errNew * errRsec ) > 0 ) & ( abs(errNew) > tol[JLoc] ) )
JKeepGo = sort( c(JL,JR) )
errLsec[JL] = errNew[JL]
errRsec[JR] = errNew[JR]
thLsec[JL] = thNew[JL]
thRsec[JR] = thNew[JR]
thLsec = thLsec[JKeepGo]
thRsec = thRsec[JKeepGo]
errLsec = errLsec[JKeepGo]
errRsec = errRsec[JKeepGo]
JLoc = JLoc[JKeepGo]
}
ii = ii-1
}
p3 = 1
if( k == n ) thR = 1 else thR = critThetaWalley( n , k , k+1 )
ii = k+1
while( ( aMin > 0 ) & ( ii <= n ) & ( p3 >= aMin ) ){ # Moving Right from Middle
thL = thR
thR = critThetaWalley( n , k , ii+1 )
p1 = p3
p2 = 1 - pbinom( ii , n , thL ) + pbinom( k , n , thL )
p3 = 1 - pbinom( ii , n , thR ) + pbinom( k , n , thR )
# Part One: Discrete Jump
JLoc = which( ( p2 <= a ) & ( a <= p1 ) )
UB[JLoc] = thL
# Part Two: Continuous Pointwise Plausibility Region
JLoc = which( ( p3 < a ) & ( a < p2 ) )
thLsec = array( thL , length(JLoc) )
thRsec = array( thR , length(JLoc) )
errLsec = p2 - a[JLoc]
errRsec = p3 - a[JLoc]
while( length( JLoc ) > 0 ){ # Secant method for a-values in the continuous region
thNew = ( errRsec*thLsec - errLsec*thRsec ) / ( errRsec - errLsec )
errNew = 1 - pbinom( ii , n , thNew ) + pbinom( k , n , thNew ) - a[JLoc]
JFin = which( abs( errNew ) <= tol[JLoc] )
UB[ JLoc[JFin] ] = thNew[JFin] # THIS IS WHERE FINISHED THETA GETS WRITTEN TO BOUND
JL = which( ( errNew * errLsec > 0 ) & ( abs(errNew) > tol[JLoc] ) )
JR = which( ( errNew * errRsec > 0 ) & ( abs(errNew) > tol[JLoc] ) )
JKeepGo = sort( c(JL,JR) )
errLsec[JL] = errNew[JL]
errRsec[JR] = errNew[JR]
thLsec[JL] = thNew[JL]
thRsec[JR] = thNew[JR]
thLsec = thLsec[JKeepGo]
thRsec = thRsec[JKeepGo]
errLsec = errLsec[JKeepGo]
errRsec = errRsec[JKeepGo]
JLoc = JLoc[JKeepGo]
}
ii = ii+1
}
ConfBounds = array( c(LB,UB) , c( length(a) , 2 ) )
return( ConfBounds )
}
critThetaWalley <- function( n , kRaw , iRaw ){ # values at which the Walley IM jumps, i.e. th_{k,i} where i < k or k < i
# expects singleton kRaw and vector iRaw
if( length(kRaw) != 1 ) print('ERROR! critThetaWalley ONLY DESIGNED TO HANDLE SINGLETON kRaw')
iVec = g = iRaw*0
kVec = kRaw*0
J0 = which( iRaw < kRaw )
J1 = which( iRaw > kRaw )
J2 = which( iRaw == kRaw )
iVec[J0] = iRaw[J0]
kVec[J0] = kRaw
iVec[J1] = kRaw
kVec[J1] = iRaw[J1]
J3 = which( iVec < (kVec-1) )
J4 = which( iVec == (kVec-1) )
g=iRaw*0
k = kVec[J3]
i = iVec[J3]
g[J3] = ( lbeta( k,n-k+1 )-lbeta( i+1,n-i ) ) / ( k-i-1 )
k = kVec[J4]
i = iVec[J4]
g[J4] = digamma( k ) - digamma( n-i )
thCrit=iRaw*0
J5 = which( g>0 )
J6 = which( g<=0 )
thCrit[J5] = 1 / ( 1 + exp(-g[J5]) )
thCrit[J6] = exp( g[J6] ) / ( 1 + exp( g[J6] ) )
if( length(J2) > 0 ){
print('WARNING! i==k for at least one i-value. Returns NaN')
thCrit[J2] = 0/0
}
return( thCrit )
}
#############################################################################################
# original flavor c-boxes
#############################################################################################
# x[i] ~ Bernoulli(parameter), x[i] is either 0 or 1
nextvalue.bernoulli <- function(x) {n <- length(x); k <- sum(x); return(env(bernoulli(k/(n+1)), bernoulli((k+1)/(n+1))))}
parameter.bernoulli <- function(x) {n <- length(x); k <- sum(x); return(env(beta(k, n-k+1), beta(k+1, n-k)))}
# x[i] ~ binomial(N, p), for known N, x[i] is a nonnegative integer less than or equal to N
nextvalue.binomial <- function(x,N) {n <- length(x); k<-sum(x); return(env(betabinomial(N,k,n*N-k+1),betabinomial(N,k+1, n*N-k)))}
parameter.binomial <- function(x,N) {n <- length(x); k <- sum(x); return(env(beta(k, n*N-k+1), beta(k+1, n*N-k)))}
# x[i] ~ binomial(N, p), for unknown N, x[i] is a nonnegative integer
# see https://sites.google.com/site/cboxbinomialnp/
nextvalue.binomialnp <- function(x) { }
parameter.binomialnp.n <- function(x) { }
parameter.binomialnp.p <- function(x) { }
# x[i] ~ Poisson(parameter), x[i] is a nonnegative integer
nextvalue.poisson <- function(x) {n <- length(x); k = sum(x); return(env(negativebinomial(size=k, prob=1-1/(n+1)),negativebinomial(size=k+1, prob=1-1/(n+1))))}
parameter.poisson <- function(x) {n <- length(x); k = sum(x); return(env(gamma(shape=k, rate=n),gamma(shape=k+1, rate=n)))}
# x[i] ~ exponential(parameter), x[i] is a nonnegative real number
nextvalue.exponential <- function(x) {n <- length(x); k = sum(x); return(gammaexponential(shape=n, rate=k))}
parameter.exponential <- function(x) {n <- length(x); k = sum(x); return(gamma(shape=n, rate=k))}
qgammaexponential <- function(p, shape, rate=1, scale=1/rate) rate * ((1-p)^(-1/shape) - 1)
rgammaexponential <- function(many, shape, rate=1, scale=1/rate) return(qgammaexponential(runif(many),shape,rate,scale))
# x[i] ~ normal(mu, sigma)
nextvalue.normal <- function(x) {n <- length(x); return(mean(x) + sd(x) * student(n - 1) * sqrt(1 + 1 / n))}
parameter.normal.mu <- function(x) {n <- length(x); return(mean(x) + sd(x) * student(n - 1) / sqrt(n))}
parameter.normal.sigma <- function(x) {n <- length(x); return(sqrt(var(x)*(n-1)/chisquared(n-1)))}
# x[i] ~ lognormal(mu, sigma), x[i] is a positive value whose logarithm is distributed as normal(mu, sigma)
nextvalue.lognormal <- function(x) {n <- length(x); return(exp(mean(log(x)) + sd(log(x)) * student(n - 1) * sqrt(1+1/n)))}
parameter.lognormal.mu <- function(x) {n <- length(x); return(mean(log(x)) + sd(log(x)) * student(n - 1) / sqrt(n))}
parameter.lognormal.sigma <- function(x) {n <- length(x); return(sqrt(var(log(x))*(n-1)/chisquared(n-1)))}
# x[i] ~ uniform(minimum, maximum), or
# x[i] ~ uniform(midpoint, width)
nextvalue.uniform <- function(x) {n=length(x); w=(max(x)-min(x))/beta(length(x)-1,2); m=(max(x)-w/2)+(w-(max(x)-min(x)))*uniform(0,1); return(uniform(m-w/2, m+w/2))}
parameter.uniform.minimum <- function(x) {n=length(x); w=(max(x)-min(x))/beta(length(x)-1,2); m=(max(x)-w/2)+(w-(max(x)-min(x)))*uniform(0,1); return(m-w/2)}
parameter.uniform.maximum <- function(x) {n=length(x); w=(max(x)-min(x))/beta(length(x)-1,2); m=(max(x)-w/2)+(w-(max(x)-min(x)))*uniform(0,1); return(m+w/2)}
parameter.uniform.width <- function(x) return((max(x)-min(x))/beta(length(x)-1,2))
parameter.uniform.midpoint <- function(x) {w=(max(x)-min(x))/beta(length(x)-1,2); return((max(x)-w/2)+(w-(max(x)-min(x)))*uniform(0,1))}
# x[i] ~ F, a continuous but unknown distribution
nextvalue.nonparametric <- function(x) return(env(histogram(c(x, Inf)), histogram(c(x, -Inf))))
# x[i] ~ normal(mu1, sigma1), y[j] ~ normal(mu2, sigma2), x and y are independent
parameter.normal.meandifference <- function(x, y) parameter.normal.mu(x) - parameter.normal.mu(y)
#############################################################################################
# miscellaneous c-box functions
#############################################################################################
ratiokm <- function(k,m) return(1/(1+m/k))
ratioKN <- function(k,n) return(k/n)
jeffkm = function(k,m) return(beta(k+0.5, m+0.5))
jeffKN = function(k,n) return(beta(k+0.5, n-k+0.5))
Bzero <- 1e-6
Bone <- 1-Bzero
km <- function(k,m) {
if ((k < 0) || (m < 0)) stop('Improper arguments to function km')
return(pmin(pmax(env(beta(k,m+1),beta(k+1,m)),Bzero),Bone))
}
KN = function(k,n) {
if ((k < 0) || (n < k)) stop('Improper arguments to function KN')
return(pmin(pmax(env(beta(k,n-k+1),beta(k+1,pmax(0,n-k))),Bzero), Bone))
}
Bppv = function(p,s,t) return(1/(1+((1/p-1)*(1-t))/s))
Bnpv = function(p,s,t) return(1/(1+(1-s)/(t*(1/p-1))))
# the mk arguments below can be km, KN, jeffkm, jeffKN, ratiokm, ratioKN
ppv = function(pk,pm,sk,sm,tk,tm, mk=km) return(Bppv(mk(pk,pm), mk(sk,sm), mk(tk,tm)))
npv = function(pk,pm,sk,sm,tk,tm, mk=km) return(Bnpv(mk(pk,pm), mk(sk,sm), mk(tk,tm)))
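# Hedged usage sketch with hypothetical counts: the PPV c-box from prevalence data (2 positives of
# 9 subjects), sensitivity data (8 of 10) and specificity data (9 of 10):
# ppv(2, 7, 8, 2, 9, 1)            # km parameterisation: (k successes, m failures) per input
# ppv(2, 9, 8, 10, 9, 10, mk=KN)   # KN parameterisation: (k successes, n trials) per input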
# x[i] = Y + error[i], error[j] ~ F, F unknown, Y fixed, x[i] and error[j] are independent
parameter.nonparametric.deconvolution <- function(x, error) {# i.e., the c-box for Y
z = NULL
for(jj in 1:length(error)) z = c(z, x - error[jj])
z = sort(z)
Q = Get_Q(length(x), length(error)) # Get_Q must be supplied (it is part of the c-box software linked above)
w = Q / sum( Q )
env(mixture(z,w), mixture(c(z[-1],Inf),w))
}
#############################################################################################
# examples and testing
#############################################################################################
# constructors
par(mfrow=c(4,3))
fn() # doesn't try to plot # c-box: (range=[-Inf,Inf], core=[-Inf,Inf])
fn(1) # c-box: (range=1, core=1)
fn(2,3) # c-box: (range=[2,3], core=[2,3])
fn(1,2,3) # c-box: (range=[1,3], core=2)
fn(1,2,3,4) # c-box: (range=[1,4], core=[2,3])
fn(cr=c(1,2)) # TRIES to plot but fails # c-box: (range=[-Inf,Inf], core=[1,2])
fn(cr=c(1,2),su=c(0,4)) # c-box: (range=[0,4], core=[1,2])
fn(su=c(0,4)) # c-box: (range=[0,4], core=[0,4])
fn(mb=c(1,2,4,6,8,12,15,18,19,19,20,21,22,23,24,25,26,27,28,29)) # c-box: (range=[1,29], core=[10,20])
a = fn(cr=12,su=c(11,13)); A = fn(11,12,13)
b = fn(cr=22,su=c(21,24)); B = fn(21,22,24)
c = fn(cr=c(33,34),su=c(31,38)); C = fn(31,33,34,38)
plot(a); plot(A,new=FALSE,col=2,lwd=1)
plot(b); plot(B,new=FALSE,col=2,lwd=1)
plot(c); plot(C,new=FALSE,col=2,lwd=1)
g = fn(0,1,2,3)
g@mb = sqrt(g@mb)
g = fn(mb = g@mb)
g
# conversions from p-boxes to consonant structures
par(mfcol=c(2,5))
underline = function(x,y=x) for (x in c(x,y)) abline(v=x,col='lightgray')
a = env(U(1,2),U(2,4))
c = makeconsonant.pbox(a)
a; underline(2,4)
c; underline(1.5,3);
a = env(U(1,3),U(2,4))
c = makeconsonant.pbox(a)
a; underline(3,4)
c; underline(2,3);
a = env(U(1,3),U(1,4))
c = makeconsonant.pbox(a)
a; underline(3,4)
c; underline(2,2.5);
a = normal(5,1)
c = makeconsonant.pbox(a)
a; underline(5)
c; underline(5)
a = normal(interval(5,6),1)
c = makeconsonant.pbox(a)
a; underline(5,6)
c; underline(5,6);
# conversion from moments (note that moments can be used to construct the consonant structure, but they're not preserved in it)
par(mfrow=c(1,3))
cc = makeconsonantfrommoments(0,100,50,5)
cc; title('m=0, M=100, mu=50, sd=5')
mc = moments.consonant(cc) # m=0, M=100, mu=[40.405, 59.595], sd=[0, 16.057]
text(c(5,20,60),c(rep(.5,3),rep(.47,3)),labels=c('mean', as.character(mc$mu),'stdev', as.character(mc$sd)),adj=0,cex=1.3)
cc = makeconsonantfrommoments(0,100,10,1)
cc; title('m=0, M=100, mu=10, sd=1')
mc = moments.consonant(cc) # m=0, M=100, mu=[8.081,12.339], sd=[0, 7.124]
text(c(5,20,60),c(rep(.5,3),rep(.47,3)),labels=c('mean', as.character(mc$mu),'stdev', as.character(mc$sd)),adj=0,cex=1.3)
cc = makeconsonantfrommoments(0,100,10,11)
cc; title('m=0, M=100, mu=10, sd=11')
mc = moments.consonant(cc) # m=0, M=100, mu=[0, 30.810], sd=[0, 23.659]
text(c(5,20,60),c(rep(.5,3),rep(.47,3)),labels=c('mean', as.character(mc$mu),'stdev', as.character(mc$sd)),adj=0,cex=1.3)
# conversion from an explicit Dempster-Shafer structure
par(mfrow=c(1,3))
dss = list(
c(1, 3),
c(1.1, 2.9),
c(1.19999, 2.8),
c(1.3, 2.70001),
c(1.39999, 2.60001),
c(1.5, 2.5),
c(1.6, 2.4),
c(1.69999, 2.3),
c(1.8, 2.20001),
c(1.89999, 2.10001)
)
makeconsonant.dss(dss)
lx = length(dss)
for (i in 1:lx) lines(dss[[i]], rep(0.5/lx + (1/lx) * (i-1), 2) )
underline(dss[[length(dss)]])
# if you crazily reduce the st parameter to match the number of focal elements, note that you'll
# want to plot the resulting structure with the type='l' rather than the default staircase option
cc = makeconsonant.dss(dss,st=10) # st=10 doesn't work well with the default staircase plotting
cc
plot(cc,new=FALSE,type='l',col=2)
lx = length(dss)
for (i in 1:lx) lines(dss[[i]], rep(0.5/lx + (1/lx) * (i-1), 2) )
underline(dss[[length(dss)]])
cc = makeconsonant.dss(dss,st=30) # st needs to be like 3x the number of focal elements for staircasing to look good
cc
#plot(cc,new=FALSE,type='l',col=peek)
lx = length(dss)
for (i in 1:lx) lines(dss[[i]], rep(0.5/lx + (1/lx) * (i-1), 2) )
underline(dss[[length(dss)]])
# unary transformations
par(mfrow=c(4,4))
f = consonant(cr=c(2,4), su=c(1,8))
f; title('[input]')
abs.consonant(f); title('abs')
exp.consonant(f); title('exp')
log.consonant(f); title('log')
sqrt.consonant(f); title('sqrt')
negate.consonant(f); title('negate')
reciprocate.consonant(f); title('reciprocate')
scale.consonant(2,f); title('scale or shift (red) by 2')
plot(shift.consonant(2,f),new=FALSE,col=2);
sign.consonant(f); title('sign')
int.consonant(f); title('int')
round.consonant(f); title('round')
ceiling.consonant(f); title('ceiling')
truncate.consonant(f,1.7,6); title('truncate [1.7,6]') #####################
atan.consonant(f); title('atan')
f = mult.consonant(1/10,consonant(1,2,4,8))
f; title('[input]')
gg = asin.consonant(f)
plot(gg, xlim=c(0,1.5)); title('asin, acos (red)')
plot(acos.consonant(f),new=FALSE,col=2);
complement.consonant(f) ; title('complement')
# cuts
par(mfcol=c(2,2))
f = consonant(cr=c(1,2), su=c(0,3))
plot(f)
cut.consonant(f,1) # 0 3
cut.consonant(f,0) # 1 2
cut.consonant(f) # 0.04522613 2.94974874
cut.consonant(f,0.95) # 0.04522613 2.94974874
cut.consonant(f,alpha=0.05) # 0.04522613 2.94974874
plot(NULL,xlim=c(0,3),ylim=c(0,1))
for (a in seq(0,1,length.out=15)) {g = cut.consonant(f,alpha=a); lines(g,rep(a,2))}
f = sqrt(consonant(cr=c(1,2), su=c(0,3)))
plot(f)
cut.consonant(f,1) # 0 1.732051
cut.consonant(f,0) # 1 1.414214
cut.consonant(f) # 0.2126644 1.7174833
cut.consonant(f,0.95) # 0.2126644 1.7174833
cut.consonant(f,alpha=0.05) # 0.2126644 1.7174833
plot(NULL,xlim=c(0,1.8),ylim=c(0,1))
for (a in seq(0,1,length.out=15)) {g = cut.consonant(f,alpha=a); lines(g,rep(a,2))}
# convolutions
par(mfcol=c(3,5))
peek='gray90'
f = consonant(cr=c(1,2), su=c(0,3))
g = shift.consonant(1,sqrt(f)) # cannot yet just write "sqrt(f)-1"
plot(f); plot(g,new=FALSE,col='blue'); title('f, g (blue)'); abline(h=0.5, col=peek)
# leftside.consonant(f)
# rightside.consonant(f)
h = intersect.consonant(f,g)
plot(h); title('intersection of f and g')
h = envelope.consonant(f,g)
plot(h); title('envelope of f and g')
H = levelwise.consonant(f,g,'+')
plot(H); title('level-wise sum of f and g')
h = sigma.consonant(f,g,'+')
plot(H,col=peek,lwd=1); plot(h,new=FALSE); title('independent sum of f and g')
h = dw.consonant(f,g,'+')
plot(H,col=peek,lwd=1); plot(h,new=FALSE); title('Frechet sum of f and g')
H = levelwise.consonant(f,g,'-')
plot(H); title('level-wise difference of f and g')
h = sigma.consonant(f,g,'-')
plot(H,col=peek,lwd=1); plot(h,new=FALSE); title('independent difference of f and g')
h = dw.consonant(f,g,'-')
plot(H,col=peek,lwd=1); plot(h,new=FALSE); title('Frechet difference of f and g')
x = fn(1,2,3,6)
y = fn(3,5,6,12)
H = levelwise.consonant(x,y,'+')
plot(NULL,xlim=c(1,12),ylim=c(0,1))
plot(x,new=FALSE); plot(y,new=FALSE,col='blue'); title('x, y (blue)'); abline(h=0.5, col=peek)
z = sigma.consonant(x,y,'+')
plot(H,col=peek,lwd=1); #title('level-wise sum of f and g')
plot(z,new=FALSE); title('Independent sum of x and y')
z = dw.consonant(x,y,'+')
plot(H,col=peek,lwd=1); #title('level-wise sum of f and g')
plot(z,new=FALSE); title('Frechet sum of x and y')
x = fn(1,2,3,6,st=10)
y = fn(3,5,6,12,st=10)
H = levelwise.consonant(x,y,'+')
plot(NULL,xlim=c(1,12),ylim=c(0,1))
plot(x,new=FALSE); plot(y,new=FALSE,col='blue'); title('x, y (blue), 10 steps'); abline(h=0.5, col=peek)
# note that you might prefer to use type='l' to plot these 10-step objects, rather than the default staircase plotting option
z = sigma.consonant(x,y,'+')
plot(H,col=peek,lwd=1); #title('level-wise sum of f and g')
plot(z,new=FALSE); title('Independent sum of x and y')
z = dw.consonant(x,y,'+')
plot(H,col=peek,lwd=1); #title('level-wise sum of f and g')
plot(z,new=FALSE); title('Frechet sum of x and y')
# Balch's consonant binomial rate estimator
par(mfrow=c(1,3))
sh = function(n,k,a, N=consonantlevels-1, many = 200, th = 1:many/many, peek = 'lightgray') {
cb = binomial_pvalue(n,k,th)
cc = binomial_ConfInt(n,k,a)
cat(cc[[1]],cc[[2]],'\n')
plot(th,cb,type='l',lwd=3,col=peek);
abline(h=0.05,col=peek);
lines(rep(cc[[1]],2),c(0,a),col=peek)
lines(rep(cc[[2]],2),c(0,a),col=peek)
f = consonant_binomial_rate(n,k)
plot.consonant(f,new=FALSE,col=2)
}
sh(10, 4, 0.05)
sh(25,25, 0.05)
sh( 5, 1, 0.05)
# original flavor c-boxes
par(mfrow=c(4,4))
signif.interval = function(i,...) return(interval(signif(i@lo,...),signif(i@hi,...))) # e.g., signif(interval(1,2)/3)
plotem <- function(t,d,p,q,r) {
sayint = function(i,...) {l = left(i); h = right(i); if (isTRUE(all.equal(l,h))) h else paste('[',format(l,...),',',format(h,...),']',sep='')} # local copy, since the version above is private to as.character.consonant
if (!is.null(p)) {
dp = c(d,p@u,p@d)
dp = range(dp[is.finite(dp)])
plot(p,xlim=dp)
edf(d,col='gray')
rug(d,col='blue')
title(paste('next',t[1],'value'))
}
if (!missing(q)) {
plot(q)
plot(makeconsonant.pbox(q),new=FALSE,col=3)
title(t[2])
cat('95% confidence interval for the',t[2],'is ')
cat(sayint(signif(ci.pbox(q),3)))
cat('\n')
}
if (!missing(r)) {
plot(r)
plot(makeconsonant.pbox(r),new=FALSE,col=3)
title(t[3])
cat('95% confidence interval for the',t[3],'is ')
cat(sayint(signif(ci.pbox(r),3)))
cat('\n')
}
}
d = (runif(7) < 0.2) + 0
a = nextvalue.bernoulli(d)
b = parameter.bernoulli(d)
plotem(c('Bernoulli','Bernoulli parameter'),d,a,b)
d = rbinom(5,12,0.6)
a = nextvalue.binomial(d,12)
b = parameter.binomial(d,12)
plotem(c('binomial','binomial parameter'),d,a,b)
d = rpois(400, 12)
a = nextvalue.poisson(d)
b = parameter.poisson(d)
plotem(c('Poisson','Poisson rate'),d,a,b)
d = rexp(7, 10)
a = nextvalue.exponential(d) ######################################### ERROR, in env2.4?
b = parameter.exponential(d)
plotem(c('exponential','exponential rate'),d,a,b)
d = rnorm(6,25,5)
a = nextvalue.normal(d)
b = parameter.normal.mu(d)
c = parameter.normal.sigma(d)
plotem(c('normal','normal mean','normal sd'),d,a,b,c)
d = rlnorm(6,5,1)
a = nextvalue.lognormal(d)
b = parameter.lognormal.mu(d)
c = parameter.lognormal.sigma(d)
plotem(c('lognormal','mean of logs','sd of logs'),d,a,b,c)
d = c(2,3,5,8,3,4)
a = nextvalue.nonparametric(d)
plotem('nonparametric',d,a)
d1 = rnorm(5,20,2)
d2 = rnorm(7,16,3)
b = parameter.normal.meandifference(d1,d2)
plotem(c('','difference of means'),NULL,NULL,b)