Second, avoid dynamite plots (see Drummond & Vowler, 2011) and use dot plots, since you only have 15 participants. You can superimpose confidence intervals on the dot plots, and you can use a category axis to label the dots/bars/lines, foregoing the need to differentiate between categories using color, point/line symbols, or hatching.

I will post back an example using your data later; for now, the paper cited above has several examples perfectly applicable to your situation, and one is inserted below.

Since you tagged the question `r`, this previous question has applicable code snippets for generating similar charts: Alternative graphics to "handle bar" plots.
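The linked posts use R and SPSS; the underlying computation for a dot plot with superimposed confidence intervals can be sketched with nothing but the Python standard library. The condition names and data below are illustrative assumptions, and the t critical value is the standard two-sided 95% value for 14 degrees of freedom (n = 15); the plotting step itself is described in comments rather than tied to a particular graphics package.

```python
import math
import statistics

T_CRIT_14 = 2.145  # two-sided 95% t critical value for df = 14 (n = 15)

def dot_ci(values):
    """Mean and a 95% confidence interval for one condition's observations."""
    m = statistics.mean(values)
    half = T_CRIT_14 * statistics.stdev(values) / math.sqrt(len(values))
    return m, m - half, m + half

# Hypothetical data: one measurement per participant (15 each) in two conditions.
conditions = {
    "control":   [310, 295, 330, 305, 290, 315, 300, 325, 312, 298, 308, 320, 297, 303, 311],
    "treatment": [280, 265, 300, 275, 260, 285, 270, 295, 282, 268, 278, 290, 267, 273, 281],
}

for name, vals in conditions.items():
    m, lo, hi = dot_ci(vals)
    # To draw the chart: plot each raw value as a dot at this condition's
    # position on the category axis, then a vertical line from lo to hi.
    # The axis label names the group, so no color/symbol legend is needed.
    print(f"{name}: mean={m:.1f}, 95% CI=({lo:.1f}, {hi:.1f})")
```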

# Citation

Drummond, Gordon B. & Sarah L. Vowler. 2011. Show the data, don't conceal them. *The Journal of Physiology* 589(8): 1861-1863. PDF available from the publisher.

Note that this article was simultaneously published in 2011 in *The Journal of Physiology*, *Experimental Physiology*, *The British Journal of Pharmacology*, *Advances in Physiology Education*, *Microcirculation*, and *Clinical and Experimental Pharmacology and Physiology*.

Below is an example extended to your data. I have posted full examples of generating similar plots in R using ggplot2 and in SPSS on my blog in this post, Avoid Dynamite Plots! Visualizing dot plots with super-imposed confidence intervals in SPSS and R.

Before I receive your data, I would like to take the "bully pulpit" and expound on the task at hand and how I would go about solving this riddle. Your suggested approach, I believe, is to form an ARIMA model using procedures that implicitly specify no time-trend variables, thus reaching incorrect conclusions about required differencing and the like. You assume no outliers, no pulses/seasonal pulses, and no level shifts (intercept changes). After probable mis-specification of the ARIMA filter/structure, you then assume one trend and one intercept and piece it together. Although programmable, this approach is fraught with logical flaws, never mind non-constant error variance or non-constant parameters over time.

The first step in the analysis is to list the sample space of possibilities that should be investigated and, in the absence of a direct solution, to conduct a computer-based (trial-and-error) search over the myriad of possible combinations, yielding a suggested optimal solution.

The sample space contains:

1. the number of distinct trends
2. the number of possible intercepts
3. the number and kind of differencing operators
4. the form of the ARMA model
5. the number of one-time pulses
6. the number of seasonal pulses (seasonal factors)
7. any required error-variance change points, suggesting the need for weighted least squares
8. any required power transformation reflecting a linkage between the error variance and the expected value

Simply evaluate all possible permutations of these 8 factors and select the unique combination that minimizes some error measurement, because ORDER IS IMPORTANT!
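The tournament over the eight factors above can be sketched as an exhaustive search over the 2^8 = 256 on/off combinations. The scoring function below is a deliberately toy placeholder: a real implementation would fit each candidate model and return a penalized error such as AIC, whereas here a pretend "truth" needing only the first two factors stands in for the data. Only the enumerate-score-minimize skeleton is the point.

```python
import itertools

# The eight factors from the list above, each either excluded (0) or included (1).
FACTORS = ["trend breaks", "intercept changes", "differencing", "ARMA form",
           "one-time pulses", "seasonal pulses", "variance change points",
           "power transformation"]

# Toy stand-in for "fit the candidate model and measure its error": the
# pretend truth needs only the first two factors, and every included factor
# adds a small complexity penalty (an AIC-like idea).
TRUTH = (1, 1, 0, 0, 0, 0, 0, 0)

def toy_score(combo):
    mismatch = sum(a != b for a, b in zip(combo, TRUTH))
    return mismatch + 0.1 * sum(combo)  # error term + complexity penalty

# The tournament: evaluate all 2**8 = 256 combinations, keep the minimizer.
best = min(itertools.product([0, 1], repeat=8), key=toy_score)
chosen = [name for name, flag in zip(FACTORS, best) if flag]
print("winning combination:", chosen)
# → winning combination: ['trend breaks', 'intercept changes']
```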

If this is onerous, so be it. I look forward to receiving your tsim2 so I can (possibly) demonstrate an approach that speaks to this "thorny issue" using some of my favorite toys.

Note that if you simulated (tightly), then your approach might be the answer, but the question I have is: is your approach robust to data violations, or is it simply a cook-book approach that works on this data set and fails on others? Trust but verify!

EDITED AFTER RECEIPT OF DATA (100 VALUES)

I trust that this discussion will highlight the need for comprehensive/programmable approaches to forming useful models. As discussed above, an efficient computer-based tournament examining the possible combinations (a maximum of 256) yielded the following suggested initial modeling approach.

The concept here is to "duplicate/approximate the human eye" by examining competing alternatives, which is (in my opinion) what we do when performing visual identification of structure. Note that in this case most eyeballs will not see the level shift at period 65 and will simply focus on the major break in trend around period 51.

1. IDENTIFY DETERMINISTIC BREAK POINTS IN TREND
2. IDENTIFY INTERCEPT CHANGES
3. EVALUATE NEED FOR ARIMA AUGMENTATION
4. EVALUATE NEED FOR PULSES
5. SIMPLIFY VIA NECESSITY TESTS

This detailed both a trend change (at 51) and an intercept change (at 65). Model diagnostic checking (always a good idea in iterative approaches to model formation) yielded an acf suggesting that improvement was necessary to render a set of residuals free of structure. An augmented model was then suggested, of the indicated form, with an insignificant AR(1) coefficient.
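Checking whether residuals are "free of structure" via their acf needs no statistics package; a common rough rule flags any lag whose autocorrelation falls outside the ±2/√n band. The following is a stdlib sketch of that check, with a strongly alternating toy series used only to exercise it.

```python
import math

def acf(x, nlags):
    """Sample autocorrelations of x at lags 1..nlags."""
    n = len(x)
    mean = sum(x) / n
    d = [xi - mean for xi in x]
    c0 = sum(v * v for v in d) / n  # lag-0 autocovariance
    return [sum(d[t] * d[t + k] for t in range(n - k)) / n / c0
            for k in range(1, nlags + 1)]

def structured_lags(residuals, nlags=10):
    """Lags whose autocorrelation exceeds the rough 2/sqrt(n) band,
    i.e. evidence that the residuals still contain structure."""
    band = 2 / math.sqrt(len(residuals))
    return [k + 1 for k, r in enumerate(acf(residuals, nlags)) if abs(r) > band]

# A strongly alternating toy series: every early lag should flag as structured.
toy = [(-1.0) ** i for i in range(100)]
print(structured_lags(toy, 3))  # → [1, 2, 3]
```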

The final model with its model statistics, the residuals from this model with their acf, and the Actual/Fit and Forecast graph were presented as images (not reproduced here). The cleansed vs. the actual series is revealing, as it details the level-shift effect.

In summary, where the OP simulated a (1,1,0) for the first 50 observations, he then abridged the last 50 observations, effectively coloring/changing the composite ARMA process to a (1,0,0) while embodying the 3 empirically identified predictors.

Comprehensive data analysis incorporating advanced search procedures is the objective. This data set is "thorny," and I look forward to any suggested improvements that may arise from this discussion. I used a beta version of AUTOBOX (which I have helped to develop) as my tool of choice.

As to your "proposed method": it may work for this series, but there are far too many assumptions, such as one and only one stochastic trend, one and only one deterministic trend (1, 2, 3, ...), no pulses, no level shifts (intercept changes), no seasonal pulses, constant error variance, and constant parameters over time, to suggest generality of approach. You are arguing from the specific to the general. There are tons of wrong ad hoc solutions waiting to be specified and just a handful of "correct solutions," of which my approach is just one.

A close-up showing observations 51 to 100 suggests a significant deviation/change in pattern (i.e., an implied intercept change) starting at period 65, which was picked up by the analytics as a level shift (change in intercept). This suggests a possible simulation flaw, as observations 51-64 follow a different pattern than observations 65-100.
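A minimal way to see how an automated search can "pick" period 65 as a level shift is to scan every candidate break point and keep the one minimizing the two-segment sum of squared errors. This is a sketch of the idea only; a production tool such as AUTOBOX additionally tests the detected shift for statistical significance, and the toy series below is an assumption chosen to mimic the data's break.

```python
def level_shift_point(series):
    """Index k at which splitting the series into two constant-mean
    segments [0:k) and [k:n) minimizes the total squared error."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    return min(range(1, len(series)),
               key=lambda k: sse(series[:k]) + sse(series[k:]))

# Toy series with an intercept change after observation 64,
# i.e. at period 65 in 1-based numbering.
toy = [10.0] * 64 + [15.0] * 36
print(level_shift_point(toy))  # → 64
```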

## Best Answer

The plot itself is perhaps the best way to present the tendency. Consider supplementing it with a robust visual indication of trend, such as a lightly colored line or curve. Building on psychometric principles (lightly and with some diffidence), I would favor an exponential curve determined by, say, the median values of the first third of the questions and the median values of the last third of the questions. An equivalent description is to fit a straight line on a log-linear plot, as shown here.

This visualization has been engineered to support the apparent objectives of the question:

- A title tells the reader what you want them to know.
- The connecting line segments are visually suppressed because they are *not* the message.
- The fitted line is made most prominent visually because it is the basic statistical summary -- it *is* the message.
- Points that are significantly beyond the values of the fitted line (with a Bonferroni adjustment for 20 comparisons) are highlighted by making them brighter and coloring them prominently. (This assumes the vertical error bars are two-sided confidence intervals for a confidence level near 95%.)
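The Bonferroni adjustment mentioned above simply divides the significance level by the number of comparisons: with 20 points and an overall 5% level, each point is tested at 0.25%, which widens the normal critical value from about 1.96 to about 3.02. A stdlib sketch:

```python
from statistics import NormalDist

def bonferroni_z(comparisons, alpha=0.05):
    """Two-sided normal critical value after a Bonferroni adjustment."""
    per_test = alpha / comparisons
    return NormalDist().inv_cdf(1 - per_test / 2)

print(round(bonferroni_z(1), 2))   # → 1.96  (no adjustment)
print(round(bonferroni_z(20), 2))  # → 3.02  (20 comparisons)
```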

The line is summarized by a single statistical measure of trend, displayed in the subtitle at the bottom: it represents an average 6.2% decrease in working time for each successive question.

This line passes through the median of the first five answer times (horizontally located at the median of the corresponding question numbers 0, 1, 2, 3, 4) and the median of the last five answer times (horizontally located at the median of the corresponding question numbers 16, 17, 18, 19, 20). This technique of using medians of the data at either extreme is advocated by John Tukey in his book *EDA* (Addison-Wesley 1977). Some judgment is needed. Tukey often used the first third and last third of the data when making such exploratory fits. When I do that here, the left part of the line barely changes (it should not, since the data are consistent in that part of the plot) while the right part changes appreciably, reflecting both the greater variation in times and the greater standard errors there. This time, however, (a) there are more badly fit points and (b) they consistently fall *below* the line. This suggests this fit does not have a sufficiently negative slope. Thus, we can have confidence that the initial exploratory estimate of $-6\%$ (or so) is one of the best possible descriptions of the trend.
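The resistant fit described above can be sketched in a few lines: take the median point of the first group and of the last group on the log scale, pass a line through them, and convert the slope into a per-question percentage change. The group size k and the geometric toy data (an exact 6% decay, chosen only to mimic the answer's setting) are assumptions for illustration.

```python
import math
from statistics import median

def resistant_trend(times, k=5):
    """Per-question multiplicative change from a Tukey-style resistant line
    on a log-linear plot: the line passes through the (median question
    number, median log-time) of the first k and of the last k points."""
    x = list(range(len(times)))
    x1, y1 = median(x[:k]), median(math.log(t) for t in times[:k])
    x2, y2 = median(x[-k:]), median(math.log(t) for t in times[-k:])
    slope = (y2 - y1) / (x2 - x1)
    return math.exp(slope) - 1  # e.g. -0.06 means a 6% decrease per question

# Hypothetical answer times decaying exactly 6% per question:
times = [100 * 0.94 ** i for i in range(20)]
print(f"{resistant_trend(times):+.1%} per question")  # → -6.0% per question
```

Using the first and last thirds instead of five points is just `k = len(times) // 3`; medians make the fit resistant to a few badly fit points at either end.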