zoo.sota
QuickNetSmall
larq_zoo.sota.QuickNetSmall(
*,
input_shape=None,
input_tensor=None,
weights="imagenet",
include_top=True,
num_classes=1000
)
Instantiates the QuickNetSmall architecture.
Optionally loads weights pre-trained on ImageNet.
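A minimal usage sketch (assuming `larq_zoo`, TensorFlow, and NumPy are installed; the random tensor below merely stands in for a properly preprocessed 224x224 RGB image):

```python
import numpy as np
import larq_zoo as lqz

# Build QuickNetSmall with ImageNet weights and the default (224, 224, 3) input.
model = lqz.sota.QuickNetSmall(weights="imagenet")

# Forward pass on a dummy batch; real images should be preprocessed the same
# way as during training (larq_zoo ships preprocessing utilities for this).
dummy = np.random.uniform(0.0, 1.0, size=(1, 224, 224, 3)).astype("float32")
probs = model.predict(dummy)
print(probs.shape)  # (1, 1000) class probabilities
```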
Model Summary
+quicknet_small stats-------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+---------------------------------------------------------------------------------------------------------------+
| input_1 - (-1, 224, 224, 3) 0 0 0 ? ? |
| quant_conv2d - (-1, 112, 112, 8) 0 216 0.84 0 2709504 |
| batch_normalization - (-1, 112, 112, 8) 0 16 0.06 0 0 |
| activation - (-1, 112, 112, 8) 0 0 0 ? ? |
| quant_depthwise_conv2d - (-1, 56, 56, 8) 0 72 0.28 0 225792 |
| batch_normalization_1 - (-1, 56, 56, 8) 0 16 0.06 0 0 |
| quant_conv2d_1 - (-1, 56, 56, 32) 0 256 1.00 0 802816 |
| batch_normalization_2 - (-1, 56, 56, 32) 0 64 0.25 0 0 |
| quant_conv2d_2 1 (-1, 56, 56, 32) 9216 0 1.12 28901376 0 |
| batch_normalization_3 - (-1, 56, 56, 32) 0 64 0.25 0 0 |
| tf.__operators__.add - (-1, 56, 56, 32) 0 0 0 ? ? |
| quant_conv2d_3 1 (-1, 56, 56, 32) 9216 0 1.12 28901376 0 |
| batch_normalization_4 - (-1, 56, 56, 32) 0 64 0.25 0 0 |
| tf.__operators__.add_1 - (-1, 56, 56, 32) 0 0 0 ? ? |
| quant_conv2d_4 1 (-1, 56, 56, 32) 9216 0 1.12 28901376 0 |
| batch_normalization_5 - (-1, 56, 56, 32) 0 64 0.25 0 0 |
| tf.__operators__.add_2 - (-1, 56, 56, 32) 0 0 0 ? ? |
| quant_conv2d_5 1 (-1, 56, 56, 32) 9216 0 1.12 28901376 0 |
| batch_normalization_6 - (-1, 56, 56, 32) 0 64 0.25 0 0 |
| tf.__operators__.add_3 - (-1, 56, 56, 32) 0 0 0 ? ? |
| activation_1 - (-1, 56, 56, 32) 0 0 0 ? ? |
| max_pooling2d - (-1, 55, 55, 32) 0 0 0 0 0 |
| depthwise_conv2d - (-1, 28, 28, 32) 0 288 1.12 0 225792 |
| quant_conv2d_6 - (-1, 28, 28, 64) 0 2048 8.00 0 1605632 |
| batch_normalization_7 - (-1, 28, 28, 64) 0 128 0.50 0 0 |
| quant_conv2d_7 1 (-1, 28, 28, 64) 36864 0 4.50 28901376 0 |
| batch_normalization_8 - (-1, 28, 28, 64) 0 128 0.50 0 0 |
| tf.__operators__.add_4 - (-1, 28, 28, 64) 0 0 0 ? ? |
| quant_conv2d_8 1 (-1, 28, 28, 64) 36864 0 4.50 28901376 0 |
| batch_normalization_9 - (-1, 28, 28, 64) 0 128 0.50 0 0 |
| tf.__operators__.add_5 - (-1, 28, 28, 64) 0 0 0 ? ? |
| quant_conv2d_9 1 (-1, 28, 28, 64) 36864 0 4.50 28901376 0 |
| batch_normalization_10 - (-1, 28, 28, 64) 0 128 0.50 0 0 |
| tf.__operators__.add_6 - (-1, 28, 28, 64) 0 0 0 ? ? |
| quant_conv2d_10 1 (-1, 28, 28, 64) 36864 0 4.50 28901376 0 |
| batch_normalization_11 - (-1, 28, 28, 64) 0 128 0.50 0 0 |
| tf.__operators__.add_7 - (-1, 28, 28, 64) 0 0 0 ? ? |
| activation_2 - (-1, 28, 28, 64) 0 0 0 ? ? |
| max_pooling2d_1 - (-1, 27, 27, 64) 0 0 0 0 0 |
| depthwise_conv2d_1 - (-1, 14, 14, 64) 0 576 2.25 0 112896 |
| quant_conv2d_11 - (-1, 14, 14, 256) 0 16384 64.00 0 3211264 |
| batch_normalization_12 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| quant_conv2d_12 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_13 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_8 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_13 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_14 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_9 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_14 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_15 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_10 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_15 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_16 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_11 - (-1, 14, 14, 256) 0 0 0 ? ? |
| activation_3 - (-1, 14, 14, 256) 0 0 0 ? ? |
| max_pooling2d_2 - (-1, 13, 13, 256) 0 0 0 0 0 |
| depthwise_conv2d_2 - (-1, 7, 7, 256) 0 2304 9.00 0 112896 |
| quant_conv2d_16 - (-1, 7, 7, 512) 0 131072 512.00 0 6422528 |
| batch_normalization_17 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| quant_conv2d_17 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_18 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_12 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_18 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_19 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_13 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_19 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_20 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_14 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_20 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_21 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_15 - (-1, 7, 7, 512) 0 0 0 ? ? |
| activation_4 - (-1, 7, 7, 512) 0 0 0 ? ? |
| average_pooling2d - (-1, 1, 1, 512) 0 0 0 0 0 |
| flatten - (-1, 512) 0 0 0 0 0 |
| quant_dense - (-1, 1000) 0 513000 2003.91 0 512000 |
| activation_5 - (-1, 1000) 0 0 0 ? ? |
+---------------------------------------------------------------------------------------------------------------+
| Total 11980800 674888 4098.78 1156055040 15941120 |
+---------------------------------------------------------------------------------------------------------------+
+quicknet_small summary-----------------------+
| Total params 12.7 M |
| Trainable params 12.6 M |
| Non-trainable params 11.8 k |
| Model size 4.00 MiB |
| Model size (8-bit FP weights) 2.07 MiB |
| Float-32 Equivalent 48.28 MiB |
| Compression Ratio of Memory 0.08 |
| Number of MACs 1.17 B |
| Ratio of MACs that are binarized 0.9864 |
+---------------------------------------------+
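The tables above are produced by larq's model summary utility; a short sketch of how to regenerate them (assuming the `larq` package is installed alongside `larq_zoo`):

```python
import larq as lq
import larq_zoo as lqz

model = lqz.sota.QuickNetSmall(weights="imagenet")

# Prints the per-layer stats table and the overall summary shown above.
lq.models.summary(model)
```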
ImageNet Metrics
| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
| --- | --- | --- | --- |
| 59.4 % | 81.8 % | 12 655 688 | 4.00 MB |
Arguments
- `input_shape` (`Sequence[int | None] | None`): optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- `input_tensor` (`tf.Tensor | keras.engine.keras_tensor.KerasTensor | None`): optional Keras tensor (i.e. the output of `layers.Input()`) to use as the image input for the model.
- `weights` (`str | None`): one of `None` (random initialization), `"imagenet"` (pre-training on ImageNet), or the path to the weights file to be loaded.
- `include_top` (`bool`): whether to include the fully-connected layer at the top of the network.
- `num_classes` (`int`): optional number of classes to classify images into, only to be specified if `include_top` is `True` and no `weights` argument is specified.
Returns
A Keras model instance.
Raises
- `ValueError`: in case of an invalid argument for `weights`, or an invalid input shape.
QuickNet
larq_zoo.sota.QuickNet(
*,
input_shape=None,
input_tensor=None,
weights="imagenet",
include_top=True,
num_classes=1000
)
Instantiates the QuickNet architecture.
Optionally loads weights pre-trained on ImageNet.
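A sketch of transfer-learning usage (assuming, as with the Keras applications, that `include_top=False` returns the final convolutional feature map; the 10-class head below is purely illustrative):

```python
import tensorflow as tf
import larq_zoo as lqz

# QuickNet as a frozen feature extractor with a new, trainable classification head.
base = lqz.sota.QuickNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3))
features = base(inputs, training=False)          # assumed shape: (batch, 7, 7, 512)
pooled = tf.keras.layers.GlobalAveragePooling2D()(features)
outputs = tf.keras.layers.Dense(10, activation="softmax")(pooled)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```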
Model Summary
+quicknet stats--------------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+----------------------------------------------------------------------------------------------------------------+
| input_1 - (-1, 224, 224, 3) 0 0 0 ? ? |
| quant_conv2d - (-1, 112, 112, 16) 0 432 1.69 0 5419008 |
| batch_normalization - (-1, 112, 112, 16) 0 32 0.12 0 0 |
| activation - (-1, 112, 112, 16) 0 0 0 ? ? |
| quant_depthwise_conv2d - (-1, 56, 56, 16) 0 144 0.56 0 451584 |
| batch_normalization_1 - (-1, 56, 56, 16) 0 32 0.12 0 0 |
| quant_conv2d_1 - (-1, 56, 56, 64) 0 1024 4.00 0 3211264 |
| batch_normalization_2 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| quant_conv2d_2 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_3 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| tf.__operators__.add - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_3 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_4 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| tf.__operators__.add_1 - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_4 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_5 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| tf.__operators__.add_2 - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_5 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_6 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| tf.__operators__.add_3 - (-1, 56, 56, 64) 0 0 0 ? ? |
| activation_1 - (-1, 56, 56, 64) 0 0 0 ? ? |
| max_pooling2d - (-1, 55, 55, 64) 0 0 0 0 0 |
| depthwise_conv2d - (-1, 28, 28, 64) 0 576 2.25 0 451584 |
| quant_conv2d_6 - (-1, 28, 28, 128) 0 8192 32.00 0 6422528 |
| batch_normalization_7 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| quant_conv2d_7 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_8 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| tf.__operators__.add_4 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_8 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_9 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| tf.__operators__.add_5 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_9 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_10 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| tf.__operators__.add_6 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_10 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_11 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| tf.__operators__.add_7 - (-1, 28, 28, 128) 0 0 0 ? ? |
| activation_2 - (-1, 28, 28, 128) 0 0 0 ? ? |
| max_pooling2d_1 - (-1, 27, 27, 128) 0 0 0 0 0 |
| depthwise_conv2d_1 - (-1, 14, 14, 128) 0 1152 4.50 0 225792 |
| quant_conv2d_11 - (-1, 14, 14, 256) 0 32768 128.00 0 6422528 |
| batch_normalization_12 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| quant_conv2d_12 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_13 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_8 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_13 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_14 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_9 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_14 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_15 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_10 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_15 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_16 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_11 - (-1, 14, 14, 256) 0 0 0 ? ? |
| activation_3 - (-1, 14, 14, 256) 0 0 0 ? ? |
| max_pooling2d_2 - (-1, 13, 13, 256) 0 0 0 0 0 |
| depthwise_conv2d_2 - (-1, 7, 7, 256) 0 2304 9.00 0 112896 |
| quant_conv2d_16 - (-1, 7, 7, 512) 0 131072 512.00 0 6422528 |
| batch_normalization_17 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| quant_conv2d_17 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_18 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_12 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_18 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_19 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_13 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_19 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_20 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_14 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_20 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_21 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_15 - (-1, 7, 7, 512) 0 0 0 ? ? |
| activation_4 - (-1, 7, 7, 512) 0 0 0 ? ? |
| average_pooling2d - (-1, 1, 1, 512) 0 0 0 0 0 |
| flatten - (-1, 512) 0 0 0 0 0 |
| quant_dense - (-1, 1000) 0 513000 2003.91 0 512000 |
| activation_5 - (-1, 1000) 0 0 0 ? ? |
+----------------------------------------------------------------------------------------------------------------+
| Total 12533760 700328 4265.66 1849688064 29651712 |
+----------------------------------------------------------------------------------------------------------------+
+quicknet summary-----------------------------+
| Total params 13.2 M |
| Trainable params 13.2 M |
| Non-trainable params 13.7 k |
| Model size 4.17 MiB |
| Model size (8-bit FP weights) 2.16 MiB |
| Float-32 Equivalent 50.48 MiB |
| Compression Ratio of Memory 0.08 |
| Number of MACs 1.88 B |
| Ratio of MACs that are binarized 0.9842 |
+---------------------------------------------+
ImageNet Metrics
| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
| --- | --- | --- | --- |
| 63.3 % | 84.6 % | 13 234 088 | 4.17 MB |
Arguments
- `input_shape` (`Sequence[int | None] | None`): optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- `input_tensor` (`tf.Tensor | keras.engine.keras_tensor.KerasTensor | None`): optional Keras tensor (i.e. the output of `layers.Input()`) to use as the image input for the model.
- `weights` (`str | None`): one of `None` (random initialization), `"imagenet"` (pre-training on ImageNet), or the path to the weights file to be loaded.
- `include_top` (`bool`): whether to include the fully-connected layer at the top of the network.
- `num_classes` (`int`): optional number of classes to classify images into, only to be specified if `include_top` is `True` and no `weights` argument is specified.
Returns
A Keras model instance.
Raises
- `ValueError`: in case of an invalid argument for `weights`, or an invalid input shape.
QuickNetLarge
larq_zoo.sota.QuickNetLarge(
*,
input_shape=None,
input_tensor=None,
weights="imagenet",
include_top=True,
num_classes=1000
)
Instantiates the QuickNetLarge architecture.
Optionally loads weights pre-trained on ImageNet.
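A sketch of building the architecture with randomly initialized weights and a custom class count (per the argument notes below, `num_classes` may only differ from 1000 when no pre-trained weights are loaded):

```python
import larq_zoo as lqz

# Random initialization with an illustrative 10-class head; only valid with weights=None.
model = lqz.sota.QuickNetLarge(weights=None, include_top=True, num_classes=10)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```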
Model Summary
+quicknet_large stats--------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+----------------------------------------------------------------------------------------------------------------+
| input_1 - (-1, 224, 224, 3) 0 0 0 ? ? |
| quant_conv2d - (-1, 112, 112, 16) 0 432 1.69 0 5419008 |
| batch_normalization - (-1, 112, 112, 16) 0 32 0.12 0 0 |
| activation - (-1, 112, 112, 16) 0 0 0 ? ? |
| quant_depthwise_conv2d - (-1, 56, 56, 16) 0 144 0.56 0 451584 |
| batch_normalization_1 - (-1, 56, 56, 16) 0 32 0.12 0 0 |
| quant_conv2d_1 - (-1, 56, 56, 64) 0 1024 4.00 0 3211264 |
| batch_normalization_2 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| quant_conv2d_2 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_3 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| tf.__operators__.add - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_3 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_4 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| tf.__operators__.add_1 - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_4 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_5 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| tf.__operators__.add_2 - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_5 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_6 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| tf.__operators__.add_3 - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_6 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_7 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| tf.__operators__.add_4 - (-1, 56, 56, 64) 0 0 0 ? ? |
| quant_conv2d_7 1 (-1, 56, 56, 64) 36864 0 4.50 115605504 0 |
| batch_normalization_8 - (-1, 56, 56, 64) 0 128 0.50 0 0 |
| tf.__operators__.add_5 - (-1, 56, 56, 64) 0 0 0 ? ? |
| activation_1 - (-1, 56, 56, 64) 0 0 0 ? ? |
| max_pooling2d - (-1, 55, 55, 64) 0 0 0 0 0 |
| depthwise_conv2d - (-1, 28, 28, 64) 0 576 2.25 0 451584 |
| quant_conv2d_8 - (-1, 28, 28, 128) 0 8192 32.00 0 6422528 |
| batch_normalization_9 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| quant_conv2d_9 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_10 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| tf.__operators__.add_6 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_10 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_11 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| tf.__operators__.add_7 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_11 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_12 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| tf.__operators__.add_8 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_12 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_13 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| tf.__operators__.add_9 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_13 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_14 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| tf.__operators__.add_10 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_14 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_15 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| tf.__operators__.add_11 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_15 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_16 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| tf.__operators__.add_12 - (-1, 28, 28, 128) 0 0 0 ? ? |
| quant_conv2d_16 1 (-1, 28, 28, 128) 147456 0 18.00 115605504 0 |
| batch_normalization_17 - (-1, 28, 28, 128) 0 256 1.00 0 0 |
| tf.__operators__.add_13 - (-1, 28, 28, 128) 0 0 0 ? ? |
| activation_2 - (-1, 28, 28, 128) 0 0 0 ? ? |
| max_pooling2d_1 - (-1, 27, 27, 128) 0 0 0 0 0 |
| depthwise_conv2d_1 - (-1, 14, 14, 128) 0 1152 4.50 0 225792 |
| quant_conv2d_17 - (-1, 14, 14, 256) 0 32768 128.00 0 6422528 |
| batch_normalization_18 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| quant_conv2d_18 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_19 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_14 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_19 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_20 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_15 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_20 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_21 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_16 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_21 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_22 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_17 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_22 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_23 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_18 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_23 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_24 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_19 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_24 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_25 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_20 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_25 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_26 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_21 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_26 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_27 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_22 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_27 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_28 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_23 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_28 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_29 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_24 - (-1, 14, 14, 256) 0 0 0 ? ? |
| quant_conv2d_29 1 (-1, 14, 14, 256) 589824 0 72.00 115605504 0 |
| batch_normalization_30 - (-1, 14, 14, 256) 0 512 2.00 0 0 |
| tf.__operators__.add_25 - (-1, 14, 14, 256) 0 0 0 ? ? |
| activation_3 - (-1, 14, 14, 256) 0 0 0 ? ? |
| max_pooling2d_2 - (-1, 13, 13, 256) 0 0 0 0 0 |
| depthwise_conv2d_2 - (-1, 7, 7, 256) 0 2304 9.00 0 112896 |
| quant_conv2d_30 - (-1, 7, 7, 512) 0 131072 512.00 0 6422528 |
| batch_normalization_31 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| quant_conv2d_31 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_32 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_26 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_32 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_33 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_27 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_33 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_34 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_28 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_34 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_35 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_29 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_35 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_36 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_30 - (-1, 7, 7, 512) 0 0 0 ? ? |
| quant_conv2d_36 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| batch_normalization_37 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| tf.__operators__.add_31 - (-1, 7, 7, 512) 0 0 0 ? ? |
| activation_4 - (-1, 7, 7, 512) 0 0 0 ? ? |
| average_pooling2d - (-1, 1, 1, 512) 0 0 0 0 0 |
| flatten - (-1, 512) 0 0 0 0 0 |
| quant_dense - (-1, 1000) 0 513000 2003.91 0 512000 |
| activation_5 - (-1, 1000) 0 0 0 ? ? |
+----------------------------------------------------------------------------------------------------------------+
| Total 22634496 707752 5527.66 3699376128 29651712 |
+----------------------------------------------------------------------------------------------------------------+
+quicknet_large summary-----------------------+
| Total params 23.3 M |
| Trainable params 23.3 M |
| Non-trainable params 21.1 k |
| Model size 5.40 MiB |
| Model size (8-bit FP weights) 3.37 MiB |
| Float-32 Equivalent 89.04 MiB |
| Compression Ratio of Memory 0.06 |
| Number of MACs 3.73 B |
| Ratio of MACs that are binarized 0.9920 |
+---------------------------------------------+
ImageNet Metrics
| Top-1 Accuracy | Top-5 Accuracy | Parameters | Memory |
| --- | --- | --- | --- |
| 66.9 % | 87.0 % | 23 342 248 | 5.40 MB |
Arguments
- `input_shape` (`Sequence[int | None] | None`): optional shape tuple, to be specified if you would like to use a model with an input image resolution that is not (224, 224, 3). It should have exactly 3 input channels.
- `input_tensor` (`tf.Tensor | keras.engine.keras_tensor.KerasTensor | None`): optional Keras tensor (i.e. the output of `layers.Input()`) to use as the image input for the model.
- `weights` (`str | None`): one of `None` (random initialization), `"imagenet"` (pre-training on ImageNet), or the path to the weights file to be loaded.
- `include_top` (`bool`): whether to include the fully-connected layer at the top of the network.
- `num_classes` (`int`): optional number of classes to classify images into, only to be specified if `include_top` is `True` and no `weights` argument is specified.
Returns
A Keras model instance.
Raises
- `ValueError`: in case of an invalid argument for `weights`, or an invalid input shape.