{"id":1868,"date":"2023-07-03T04:48:33","date_gmt":"2023-07-03T01:48:33","guid":{"rendered":"https:\/\/aivolga.com\/?page_id=1868"},"modified":"2023-08-08T08:44:53","modified_gmt":"2023-08-08T05:44:53","slug":"compute-in-memory","status":"publish","type":"page","link":"https:\/\/aivolga.com\/?page_id=1868","title":{"rendered":"Compute-in-Memory"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-page\" data-elementor-id=\"1868\" class=\"elementor elementor-1868\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-6dd091be elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"6dd091be\" data-element_type=\"section\" data-e-type=\"section\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;,&quot;shape_divider_bottom&quot;:&quot;tilt&quot;,&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}\">\n\t\t\t\t\t\t\t<div class=\"elementor-background-overlay\"><\/div>\n\t\t\t\t\t\t<div class=\"elementor-shape elementor-shape-bottom\" aria-hidden=\"true\" data-negative=\"false\">\n\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 1000 100\" preserveAspectRatio=\"none\">\n\t<path class=\"elementor-shape-fill\" d=\"M0,6V0h1000v100L0,6z\"\/>\n<\/svg>\t\t<\/div>\n\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-698300c3 elementor-invisible\" data-id=\"698300c3\" data-element_type=\"column\" data-e-type=\"column\" data-settings=\"{&quot;animation&quot;:&quot;fadeInLeft&quot;}\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-67b6d351 elementor-widget elementor-widget-heading\" data-id=\"67b6d351\" data-element_type=\"widget\" data-e-type=\"widget\" 
data-settings=\"{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h1 class=\"elementor-heading-title elementor-size-default\"><span style=\"color:#12b39b\">Compute-in-Memory<\/span><br>Boosting memory capacity and processing speed<\/h1>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<section class=\"elementor-section elementor-inner-section elementor-element elementor-element-60c6d07 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"60c6d07\" data-element_type=\"section\" data-e-type=\"section\" data-settings=\"{&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-inner-column elementor-element elementor-element-d8245ea\" data-id=\"d8245ea\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-1bac837 elementor-widget elementor-widget-image\" data-id=\"1bac837\" data-element_type=\"widget\" data-e-type=\"widget\" data-settings=\"{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img fetchpriority=\"high\" decoding=\"async\" width=\"588\" height=\"393\" src=\"https:\/\/aivolga.com\/wp-content\/uploads\/2023\/07\/Compute_In_Memory_IC.png\" class=\"attachment-large size-large wp-image-1803\" alt=\"\" srcset=\"https:\/\/aivolga.com\/wp-content\/uploads\/2023\/07\/Compute_In_Memory_IC.png 588w, https:\/\/aivolga.com\/wp-content\/uploads\/2023\/07\/Compute_In_Memory_IC-300x201.png 300w\" sizes=\"(max-width: 588px) 100vw, 588px\" 
\/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<div class=\"elementor-column elementor-col-50 elementor-inner-column elementor-element elementor-element-d243f7f\" data-id=\"d243f7f\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-ac41ed1 elementor-widget elementor-widget-text-editor\" data-id=\"ac41ed1\" data-element_type=\"widget\" data-e-type=\"widget\" data-settings=\"{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tToday\u2019s most common computing architectures are built on assumptions about how memory is accessed and used. These systems assume that the full memory space is too large to fit on-chip near the processor, and that we do not know what memory will be needed at what time. To address the space issue and the uncertainty issue, these architectures build a hierarchy of memory. 
The memory tiers nearest the CPU are small and fast enough to sustain frequent access, while DRAM and SSDs are large enough to hold the bulkier, less time-sensitive data.\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-24d507a elementor-widget elementor-widget-image\" data-id=\"24d507a\" data-element_type=\"widget\" data-e-type=\"widget\" data-settings=\"{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"800\" height=\"185\" src=\"https:\/\/aivolga.com\/wp-content\/uploads\/2023\/07\/sca-1024x237.png\" class=\"attachment-large size-large wp-image-1871\" alt=\"\" srcset=\"https:\/\/aivolga.com\/wp-content\/uploads\/2023\/07\/sca-1024x237.png 1024w, https:\/\/aivolga.com\/wp-content\/uploads\/2023\/07\/sca-300x70.png 300w, https:\/\/aivolga.com\/wp-content\/uploads\/2023\/07\/sca-768x178.png 768w, https:\/\/aivolga.com\/wp-content\/uploads\/2023\/07\/sca.png 1372w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<div class=\"elementor-element elementor-element-59c051c elementor-widget elementor-widget-text-editor\" data-id=\"59c051c\" data-element_type=\"widget\" data-e-type=\"widget\" data-settings=\"{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\tCompute-in-memory is built on different assumptions: we have a large amount of data to access, but we know exactly when each piece will be needed. These assumptions hold for AI inference because the execution flow of a neural network is deterministic \u2013 unlike many other applications, it does not depend on the input data. 
Using that knowledge, we can place data in memory strategically, instead of building a cache hierarchy to compensate for not knowing what will be needed. Compute-in-memory also attaches local compute to each memory array, so data is processed right where it is stored. With compute beside every memory array, an enormous memory can deliver the same performance and efficiency as an L1 cache (or even a register file).\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-0e4f5e4 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"0e4f5e4\" data-element_type=\"section\" data-e-type=\"section\" data-settings=\"{&quot;background_background&quot;:&quot;classic&quot;,&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}\">\n\t\t\t\t\t\t\t<div class=\"elementor-background-overlay\"><\/div>\n\t\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-238afa6\" data-id=\"238afa6\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Compute-in-MemoryBoosting memory capacity and processing speed Today\u2019s most common computing architectures are built on assumptions about how memory is accessed and used. These systems assume that the full memory space is too large to fit on-chip near the processor, and that we do not know what memory will be needed at what time. 
To address [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":887,"menu_order":3,"comment_status":"closed","ping_status":"closed","template":"elementor_header_footer","meta":{"footnotes":""},"class_list":["post-1868","page","type-page","status-publish","hentry"],"lang":"en","translations":{"en":1868,"ru":1967},"pll_sync_post":[],"_links":{"self":[{"href":"https:\/\/aivolga.com\/index.php?rest_route=\/wp\/v2\/pages\/1868","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aivolga.com\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/aivolga.com\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/aivolga.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aivolga.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1868"}],"version-history":[{"count":23,"href":"https:\/\/aivolga.com\/index.php?rest_route=\/wp\/v2\/pages\/1868\/revisions"}],"predecessor-version":[{"id":3241,"href":"https:\/\/aivolga.com\/index.php?rest_route=\/wp\/v2\/pages\/1868\/revisions\/3241"}],"up":[{"embeddable":true,"href":"https:\/\/aivolga.com\/index.php?rest_route=\/wp\/v2\/pages\/887"}],"wp:attachment":[{"href":"https:\/\/aivolga.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1868"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}