1
00:00:00,025 --> 00:00:05,131
[SOUND] So researchers often
use multivariate decomposition

2
00:00:05,131 --> 00:00:09,547
methods to study functional connectivity.

3
00:00:09,547 --> 00:00:13,129
These methods provide a decomposition
of the data into separate components.

4
00:00:13,129 --> 00:00:16,038
And they can be used to find
coherent brain networks and

5
00:00:16,038 --> 00:00:20,349
provide information on how different
brain regions interact with one another.

6
00:00:21,490 --> 00:00:26,410
The most common decomposition methods are
principal components analysis, or PCA, and

7
00:00:26,410 --> 00:00:28,450
independent components analysis, or ICA.

8
00:00:30,190 --> 00:00:35,510
So throughout, we're going to organize
the fMRI data in an M x N matrix X.

9
00:00:35,510 --> 00:00:38,160
The row dimension is
the number of time points and

10
00:00:38,160 --> 00:00:40,520
the column dimension is the number of voxels.

11
00:00:40,520 --> 00:00:45,920
So we're just going to put all the data
together in a time by voxel matrix.

12
00:00:45,920 --> 00:00:51,960
So here, in contrast to, say,
the GLM model, where we analyzed

13
00:00:51,960 --> 00:00:56,610
one voxel at a time, here we're going to
analyze all the voxels simultaneously.

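As a concrete sketch of that data layout (an illustration assuming the scan is already loaded as a 4D NumPy array, not code from the lecture):

```python
import numpy as np

# Hypothetical preprocessed fMRI scan: x, y, z, and time (shapes illustrative).
data_4d = np.random.randn(64, 64, 30, 200)

n_time = data_4d.shape[-1]

# Flatten the three spatial axes and put time first: X is time by voxels.
X = data_4d.reshape(-1, n_time).T

print(X.shape)  # (200, 122880): M = 200 time points, N = 122880 voxels
```
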
14
00:00:58,960 --> 00:01:02,780
So principal components analysis
is a multivariate procedure

15
00:01:02,780 --> 00:01:04,365
concerned with explaining the variance-

16
00:01:04,365 --> 00:01:06,810
covariance structure of a high-
dimensional random vector.

17
00:01:07,890 --> 00:01:12,420
So in PCA, a set of correlated variables
are transformed into a set of uncorrelated

18
00:01:12,420 --> 00:01:15,900
variables ordered by the amount of
variability in the data that they explain.

19
00:01:18,120 --> 00:01:23,802
So in fMRI, principal component analysis
involves finding the spatial modes,

20
00:01:23,802 --> 00:01:26,046
or eigenimages, in the data.

21
00:01:26,046 --> 00:01:27,576
These are the patterns that account for

22
00:01:27,576 --> 00:01:30,470
most of the variance-covariance
structure in the data.

23
00:01:30,470 --> 00:01:34,011
And they're ranked in order of the amount
of variation that they explain.

24
00:01:34,011 --> 00:01:38,770
The eigenimages can be obtained
using singular value decomposition, or

25
00:01:38,770 --> 00:01:43,101
SVD, which decomposes the data into
two sets of orthogonal vectors that

26
00:01:43,101 --> 00:01:45,670
correspond to patterns in space and time.

27
00:01:47,280 --> 00:01:52,475
So, the singular value decomposition is
an operation that decomposes the matrix X

28
00:01:52,475 --> 00:01:54,300
into three other matrices.

29
00:01:54,300 --> 00:01:58,549
So, we write X is equal to U
times S times V transpose,

30
00:01:58,549 --> 00:02:02,799
where V transpose V is equal
to the identity matrix and

31
00:02:02,799 --> 00:02:06,878
U transpose U is also equal
to the identity matrix.

32
00:02:06,878 --> 00:02:11,680
And S is a diagonal matrix whose
elements are called singular values.

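As a minimal sketch of this decomposition in code (using NumPy's built-in SVD on a toy random matrix standing in for real data):

```python
import numpy as np

X = np.random.randn(200, 5000)  # toy time-by-voxels data matrix

# Economy-size SVD: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# U and V have orthonormal columns, so U'U = I and V'V = I
assert np.allclose(U.T @ U, np.eye(U.shape[1]))
assert np.allclose(Vt @ Vt.T, np.eye(Vt.shape[0]))

# The diagonal matrix S is returned as the vector of singular values s
X_hat = U @ np.diag(s) @ Vt
assert np.allclose(X, X_hat)  # the three factors reproduce X
```
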
33
00:02:13,290 --> 00:02:16,370
So, pictorially,
we can write this as follows.

34
00:02:16,370 --> 00:02:21,392
We can take our matrix, X,
which again was time by voxels,

35
00:02:21,392 --> 00:02:25,921
and decompose it into three matrices,
U, S, and V.

39
00:02:39,297 --> 00:02:43,820
So here what I'm going to claim
is that the columns of V, or

40
00:02:43,820 --> 00:02:47,768
the rows of V transpose,
are the eigenimages, and

41
00:02:47,768 --> 00:02:52,940
the columns of U represent
the corresponding time courses.

42
00:02:52,940 --> 00:02:56,369
So these are the time courses that
correspond to the respective eigenimages.

43
00:02:57,860 --> 00:03:03,940
So, we can write this as X is equal to
U times S V transpose, but because of

44
00:03:03,940 --> 00:03:10,254
the diagonal nature of S, we can also
write it as a sum over the columns of U and V.

45
00:03:10,254 --> 00:03:16,139
We can write S1 times U1 V1 transpose, plus
similar terms for each subsequent column.

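In code, that rank-one expansion might look like this (a sketch continuing the NumPy example above):

```python
import numpy as np

X = np.random.randn(100, 500)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# X as a sum of rank-one terms: s_i * outer(u_i, v_i)
X_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(X, X_sum)

# Truncating the sum at k terms gives the best rank-k approximation of X
k = 5
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```
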
46
00:03:17,190 --> 00:03:19,500
Here we see a real data example.

47
00:03:19,500 --> 00:03:22,570
Here we have X, which is the first image.

48
00:03:22,570 --> 00:03:26,551
And this can be decomposed into
a number of sub-matrices as

49
00:03:26,551 --> 00:03:28,886
indicated on the previous slide.

50
00:03:28,886 --> 00:03:34,050
The first sub-matrix consists
of S1, which is a scalar,

51
00:03:34,050 --> 00:03:38,140
times U1, which is the time course
corresponding to the first eigenimage,

52
00:03:38,140 --> 00:03:41,671
and V1 transpose, which
is the first eigenimage.

53
00:03:41,671 --> 00:03:46,212
Then we have the second
sub-matrix, which is S2, times U2,

54
00:03:46,212 --> 00:03:50,555
which is the time course corresponding
to the second eigenimage, times V2

55
00:03:50,555 --> 00:03:54,160
transpose, which is the second
eigenimage, and so on.

56
00:03:54,160 --> 00:03:59,030
Now, each of these Vs has a length equal to
the number of voxels, and they can be

57
00:03:59,030 --> 00:04:04,100
reshaped back into images
corresponding to the spatial modes here.

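A sketch of that reshaping step (the volume shape here is hypothetical; real data would also use a brain mask):

```python
import numpy as np

vol_shape = (64, 64, 30)                      # hypothetical volume dimensions
X = np.random.randn(200, np.prod(vol_shape))  # time by voxels
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Row i of Vt is the i-th eigenimage as a flat vector of voxel weights;
# reshape it back into a 3D volume to view it as a spatial mode.
eigenimage_1 = Vt[0].reshape(vol_shape)
timecourse_1 = U[:, 0]                        # its corresponding time course
```
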
58
00:04:04,100 --> 00:04:09,250
So here we see the first eigenimage,
which is the V1 transpose here,

59
00:04:09,250 --> 00:04:13,020
and we have U1 transpose which is
the corresponding time course.

60
00:04:13,020 --> 00:04:15,990
Similarly, we get the second eigenimage,
and

61
00:04:15,990 --> 00:04:19,300
its corresponding time course,
and so on.

62
00:04:19,300 --> 00:04:22,340
So if we do this,
we can get several different

63
00:04:22,340 --> 00:04:26,670
temporal components corresponding
to each of the columns of U,

64
00:04:26,670 --> 00:04:31,510
and we get the corresponding
eigenimages below.

65
00:04:31,510 --> 00:04:34,150
And here's an example of a PCA analysis.

66
00:04:34,150 --> 00:04:38,220
Here we see the first four
eigenimages on the bottom, so

67
00:04:38,220 --> 00:04:44,564
in the bottom panel we see four rows,
one for each eigenimage.

68
00:04:44,564 --> 00:04:47,829
And on the top panel we see
the corresponding time courses.

69
00:04:49,030 --> 00:04:53,670
And to the right of these time courses,
you see percentages.

70
00:04:53,670 --> 00:04:57,940
Those percentages are the percent
of variation explained by each

71
00:04:57,940 --> 00:05:02,710
of the components, and they're related to
the singular values in the matrix S.

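Those percentages come straight from the singular values: component i explains s_i squared over the sum of all squared singular values. A sketch (assuming each voxel's time series is mean-centered first):

```python
import numpy as np

X = np.random.randn(200, 5000)
Xc = X - X.mean(axis=0)            # center each voxel's time series

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Percent of total variation explained by each component
pct_var = 100 * s**2 / np.sum(s**2)
print(pct_var[:4])                 # e.g., the first four components
```
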
72
00:05:04,310 --> 00:05:08,270
>> So independent component analysis, or
ICA, is a family of techniques used to

73
00:05:08,270 --> 00:05:11,730
extract independent signals
from a mixed source signal.

74
00:05:11,730 --> 00:05:15,440
ICA provides a method to blindly separate
the data into spatially independent

75
00:05:15,440 --> 00:05:17,150
components.

76
00:05:17,150 --> 00:05:20,000
Here, the key assumption is
that the data set consists of p

77
00:05:20,000 --> 00:05:24,400
spatially independent components which
are linearly mixed, but spatially fixed.

78
00:05:25,620 --> 00:05:30,530
The ICA model differs a little
bit from what we used in PCA.

79
00:05:30,530 --> 00:05:35,160
Here, the matrix X is decomposed
into two matrices, A and S.

80
00:05:36,330 --> 00:05:40,290
Here A is referred to as the mixing
matrix and S as the source matrix.

81
00:05:41,380 --> 00:05:46,139
So our goal is ultimately to
use this information to find

82
00:05:46,139 --> 00:05:50,288
an unmixing matrix W such
that Y, equal to WX,

83
00:05:50,288 --> 00:05:56,587
provides a good approximation to S,
which are these independent sources.

84
00:05:56,587 --> 00:06:00,730
If the mixing matrix is known, the problem
is straightforward and almost trivial.

85
00:06:00,730 --> 00:06:06,330
However, ICA tries to solve this problem
without knowing the mixing parameters.

86
00:06:06,330 --> 00:06:09,480
So instead, what it does is it
exploits some key assumptions.

87
00:06:09,480 --> 00:06:13,020
First it assumes there is
linear mixing of sources,

88
00:06:13,020 --> 00:06:17,850
then it assumes that the components Si are
statistically independent of one another.

89
00:06:17,850 --> 00:06:21,600
And it assumes that the components
are non-Gaussian, or that at most

90
00:06:21,600 --> 00:06:23,930
one of them can be Gaussian.

91
00:06:23,930 --> 00:06:26,250
When applying ICA to fMRI, we

92
00:06:26,250 --> 00:06:30,850
assume that fMRI data can be modeled by
identifying sets of voxels whose activity

93
00:06:30,850 --> 00:06:35,400
varies over time and
differs from the activity in other sets.

94
00:06:35,400 --> 00:06:39,250
We try to decompose the data into
spatially independent component maps

95
00:06:39,250 --> 00:06:41,370
with a set of corresponding time courses.

96
00:06:42,940 --> 00:06:47,228
Here's a cartoon illustration: here we
have X, which again is time by voxels;

97
00:06:47,228 --> 00:06:48,254
that's our data.

98
00:06:48,254 --> 00:06:50,754
And we have two matrices, A and S.

99
00:06:50,754 --> 00:06:55,800
So again, S represents the spatially
independent components, one for each row.

100
00:06:57,590 --> 00:07:01,731
The columns of A represent the time
courses corresponding to the spatially

101
00:07:01,731 --> 00:07:03,253
independent components.

102
00:07:03,253 --> 00:07:08,530
So the first column of A corresponds
to the first row of S.

103
00:07:10,990 --> 00:07:15,659
And what we want to do here is use
an ICA algorithm to find both A and S.

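As a sketch of one common way to do that in practice, here is spatial ICA via scikit-learn's FastICA (feeding it X transposed so that voxels play the role of samples; toy data, not the lecture's):

```python
import numpy as np
from sklearn.decomposition import FastICA

X = np.random.randn(200, 5000)      # toy time-by-voxels data

ica = FastICA(n_components=20, random_state=0)

# Fit on X.T (voxels by time) so the recovered sources are spatial maps.
S = ica.fit_transform(X.T).T        # S: components by voxels (spatial maps)
A = ica.mixing_                     # A: time by components (time courses)

# Row i of S is a spatially independent map; column i of A is its time course.
X_hat = A @ S                       # approximates X (up to the removed mean)
```
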
104
00:07:17,350 --> 00:07:19,710
Here's an example of fitting ICA.

105
00:07:19,710 --> 00:07:21,800
And here are two different components.

106
00:07:21,800 --> 00:07:26,750
So this corresponds to two
different spatial components

107
00:07:26,750 --> 00:07:30,450
from the ICA decomposition.

108
00:07:30,450 --> 00:07:33,504
First, we see a task-related component.

109
00:07:33,504 --> 00:07:36,980
And in the second,
we see a noise component.

110
00:07:36,980 --> 00:07:41,988
And here, you can see that this is
a noise component from the high activation

111
00:07:41,988 --> 00:07:47,713
around the edges of the brain, which is
probably due to motion-related artifacts.

112
00:07:47,713 --> 00:07:50,405
Here's an example of eight
of the most common and

113
00:07:50,405 --> 00:07:55,630
consistently identified resting-state
networks, obtained using ICA.

114
00:07:55,630 --> 00:08:00,110
And earlier this week, in the previous
lecture on resting-state fMRI,

115
00:08:00,110 --> 00:08:02,340
we showed these results.

116
00:08:02,340 --> 00:08:07,180
And here, we can now come back to this and
say that this was obtained using ICA.

117
00:08:08,860 --> 00:08:11,440
So what's the difference between PCA and
ICA?

118
00:08:11,440 --> 00:08:14,870
Well, PCA assumes
an orthonormality constraint.

119
00:08:14,870 --> 00:08:19,240
In contrast, ICA assumes statistical
independence among a collection of spatial

120
00:08:19,240 --> 00:08:20,520
patterns.

121
00:08:20,520 --> 00:08:24,380
So independence is a stronger
requirement than orthonormality.

122
00:08:24,380 --> 00:08:28,990
However, in ICA, the spatially independent
components are not ranked in order of

123
00:08:28,990 --> 00:08:32,430
importance as they
are when performing PCA.

124
00:08:32,430 --> 00:08:36,160
So it behooves you to go through all
the components after the fact and find out

125
00:08:36,160 --> 00:08:39,460
which ones are important and which ones
are just related to noise.

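One simple post-hoc heuristic for that sorting (an illustration, not a step of ICA itself) is to rank components by the variance of their rank-one contribution to the data:

```python
import numpy as np

# A: time by components, S: components by voxels, from an ICA fit as above
A = np.random.randn(200, 20)
S = np.random.randn(20, 5000)

# Variance carried by each component's contribution, outer(A[:, i], S[i])
contrib = [np.var(np.outer(A[:, i], S[i])) for i in range(S.shape[0])]
order = np.argsort(contrib)[::-1]   # component indices, largest first
```
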
126
00:08:40,480 --> 00:08:42,530
Okay, so that's the end of this module.

127
00:08:42,530 --> 00:08:44,920
Here we've introduced principal
component analysis and

128
00:08:44,920 --> 00:08:46,200
independent component analysis.

129
00:08:46,200 --> 00:08:51,780
So these are two ways of taking
the full time by voxel data and

130
00:08:51,780 --> 00:08:54,030
finding interesting patterns
of activation in it.

131
00:08:54,030 --> 00:08:58,240
And so, these are commonly used in
functional connectivity analysis.

132
00:08:58,240 --> 00:09:02,672
Okay, in the next module, we'll talk
a little bit about dynamic connectivity.

133
00:09:02,672 --> 00:09:03,642
See you then, bye.

134
00:09:03,642 --> 00:09:05,087
[SOUND]


